A Pervading Artifice: The Ethics of AI

In collaboration with Lachlan Chu

Introduction

With the progression of technology always come unavoidable uncertainties about legitimacy and dependency. In the age before the industrial revolution of the 18th and 19th centuries, autonomy was a minor stir, possessing little power to influence or incite philosophical discourse. Artisanship, having long held center stage, was dethroned by the construction of primitive factories; the market revolution, as an active entity, continually molded the economy in ways that incentivized continental expansion over the earlier locality of trade and self-sufficiency. In the liminal space between pre-industrial America and its post-industrial successor, philosophical ideas about the future of the individual as an individual became increasingly frequent and foregrounded. Minor side steps aside, thinkers saw the shift from local to global and the institutionalization of factory work as a ridge with only two slopes: either humans would inculcate artifice into their progeny and weave it into the fibers of their households, or they would reject production and progress in search of a locale whose center might be inset with the red gem of originality.

Clearly, the former was chosen. Avarice, above any other motivation, was pursued, manifesting in the muscles that flipped levers and the voices that howled commands over spitting pipes and hot steam. In the industrial environment, every employee served the company with neo-nihilistic thoughts of power and ascension abundant in their minds. They expected the world to fall into place around their feet, and for a hill to be raised beneath them upon which they could stand. As industry grew and aged, however, such thoughts became diluted remnants of past ambitions. Since the dawn of the twenty-first century, self-sufficiency has been smothered in corporate products, gagged by the callous hands of tradition, and sealed in a near-impenetrable film of modern apathy. By this point, the reader has almost certainly developed questions about the purpose and direction of this article. In the contemporary age of automation and artificial intelligence, it is critical that we offer insight into the question of AI production and its morality. That is not to say, however, that we will supply answers, of which there are many, and all of which are subjective.

By now, it is unlikely that one has not heard the terms "generative AI" and "interactive AI." Regardless, the former is a general subdivision of artificial intelligence with the capacity to draw from vast archives of data (be it images, videos, text, or other resources) in order to produce prompt-directed pieces such as art, song lyrics, or creative writing. Interactive AI, often considered an even larger division than generative AI, is the class of all artificial intelligence able to engage in sustained conversation with humans and answer their questions or requests. As a basis for their imminent inclusion in a discussion about autonomy, we would first like to present the notion of dependency in contemporary life.

Dependency and Technology

Since factories and technology were first built specifically for human consumption and use, we have been assisted by devices of all kinds that strip away a sense of self-sufficiency and autonomy. As with all quantities, anything in excess can be harmful; regulation is always necessary. Many of the "human" accomplishments the modern era has to offer were only possible because of a number of minor technological aids: for instance, the devices on which novels are written, the machine learning systems that stoke entrepreneurial furnaces, and the search engines and task management systems that maintain daily productivity. The trend, sadly, is that the amount of work we do to achieve our accomplishments is shrinking; the ratio of original to outsourced work continues to fall, and self-sufficiency, even by contemporary standards, drifts ever further from reality.

With all that aside, it should be noted that self-sufficient civilizations are not as successful as interdependent ones, and that a regression to self-sufficiency (at any level) would diminish the quality of life we enjoy today. Nor is interdependence necessarily a bad thing; self-sufficiency is an obsolete concept simply because there is little motivation for it. With the development and cultivation of technology and the global market, we have not needed to produce for ourselves in centuries. Still, one discussion remains: what is the meaning of property, and how will it change in the near future?

This discussion has grown increasingly prominent with the rise of public AI systems such as Midjourney, Stable Diffusion, and ChatGPT, whose functions intrude on the realm of ownership. Sans an OED citation, we can loosely define the word "property" as something that belongs to someone; similarly, the verb "to belong" most literally means "to be the property of." Circular as they are, these words are extremely difficult to define, especially now. With an emphasis on the meaning of property, ownership, autonomy, and self-sufficiency, we leave you with the following questions on which to form newly informed opinions and answers.

Questions for Reflection

  1. In the realm of the blockchain and non-fungible tokens, consider the content creator. What kind of shift would their purpose undergo if placed in competition with accounts run by generative AI?
    1. Consider now the digital market as a whole, one whose main draws are its cryptocurrencies and NFTs. How can the implementation of generative AI destroy a market? How can an influx of quantity overrun a business?
  2. At what point should the development of sentient AI be terminated? When AI-generated work becomes indistinguishable from that of a human master of the field, how should legitimacy and value be measured?
    1. Will the use of personal, trained AI systems ever be permitted in educational or competitive institutions, as the calculator's use once was?
  3. Is it our obligation to pursue technology to its limit, with or without the intent of the betterment of society?