The emergence of "true" general artificial intelligence is a tantalizing prospect. But will it render the programming profession obsolete?
Generative neural networks have made significant strides in understanding natural language. However, they often rely on prompts or hints from humans to handle complex tasks. The question arises: should we invest in prompt engineering skills now, or can we wait until they become unnecessary?
People have been doing prompt engineering for a long time; we just didn't call it that.
Prompt engineering will remain relevant for some time. It serves as a new way to communicate with computers. Remember, a computer does what you tell it to do, not necessarily what you want it to do.
Over time, we've used various methods to communicate with computers, from command-line interfaces to elaborate query operators. Even in search engines, how we phrase our queries affects the results we get.
Prompt engineering has a life cycle of its own. As neural networks become smarter, the need for elaborate prompts diminishes. Currently, we specify detailed characteristics to get the results we want. In image generation, for instance, we mention terms like HD, 50 mm, and ISO. But the AI doesn't use these specifics the way a photographer would. It has learned from training captions that such terms co-occur with high-quality images, so it treats them as quality markers and acts accordingly.
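To make that concrete, here's a minimal sketch of such a prompt; the base prompt is invented, the marker list mirrors the terms above, and any real image-generation service has its own conventions:

```python
# A minimal sketch of how quality markers pile up in an image prompt.
# The base prompt and marker list are invented for illustration; every
# real image service (and every model generation) has its own syntax.

base_prompt = "portrait of an elderly fisherman at dawn"

# Terms the model has learned to associate with high-quality captions --
# not camera settings it actually applies.
quality_markers = ["HD", "50 mm", "ISO", "professional photo"]

prompt = ", ".join([base_prompt, *quality_markers])
print(prompt)
# portrait of an elderly fisherman at dawn, HD, 50 mm, ISO, professional photo
```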
In the future, as neural networks understand concepts like "professional photo" without explicit prompts, the need for technical specifications will fade away.
Developers aren't overly thrilled with prompts. Ideally, they want users to express their needs in just a few words and receive immediate results.
While prompt engineering remains important for now, the ongoing advancement of neural networks may eventually make it obsolete, giving way to more intuitive interactions where a few words are enough to achieve the desired outcome.
Is it worth investing time in learning prompt engineering while it's still a necessity?
Tricks and techniques in prompt engineering can quickly become outdated with the release of new neural network models. When a new model arrives, old tricks may stop working, and you'll have to adapt. The underlying skill, however, will remain relevant and in demand to some extent over the long term.
Prompt engineering isn't an entirely new concept. Think about a manager who doesn't physically perform tasks but achieves results. They employ prompt engineering by formulating their requests in a way that elicits a favorable response, much like communicating with artificial intelligence. If the desired outcome isn't achieved, they add key tags like "absolutely necessary" or "don't forget." These keywords enhance the chances of success. In essence, humans have been practicing prompt engineering for a while, albeit without the formal label. In the past, we engineered natural intelligence, and now we're engineering artificial intelligence.
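As a toy illustration, here's a hypothetical helper that escalates a request the way such a manager might, appending one more emphasis tag on each retry; the function, phrases, and retry policy are all invented for this sketch:

```python
# A toy "managerial" retry loop: each failed attempt re-sends the request
# with one more emphasis tag appended. The helper, the phrases, and the
# policy are all hypothetical -- the point is only the shape of the trick.

EMPHASIS_TAGS = [
    "This is absolutely necessary.",
    "Don't forget the formatting requirements.",
]

def build_prompt(request: str, attempt: int) -> str:
    """Return the request with `attempt` emphasis tags appended."""
    return " ".join([request, *EMPHASIS_TAGS[:attempt]])

for attempt in range(3):
    print(build_prompt("Summarize the report in three bullet points.", attempt))
```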
Will our interactions with computers eventually mirror our interactions with humans?
Yes, because it's a reciprocal process. We learn to communicate with computers in a manner similar to how we communicate with humans, and computers learn to communicate like humans too. This process has been ongoing, starting from the days of punch cards, which were an entirely non-human mode of communication. We've made significant strides with voice recognition, text queries, and speech-to-text conversion. What's needed next is for artificial intelligence to better understand context and discern what's required in various situations. When that happens, it'll be a game-changer.
The landscape doesn't favor a single leader; there will always be multiple contenders. Developing such networks is expensive, especially for pioneers, who pay for every exploratory experiment. It's often easier to be second in line, learning from others' trials and concentrating effort where it has already proven to yield results.
Leadership often entails scaling a product globally and amassing substantial funds for further development. A company operating at a hypothetical scale of 100 billion has ten times the potential of one at 10 billion.
Currently, the battle for leadership includes Google, Microsoft, and Meta as key players. Chinese companies, although less visible to outsiders due to language barriers, are strong contenders: giants like Tencent, Alibaba, Baidu, and Huawei have their own large language models and are making significant strides.
In any task, a neural network follows the path of least resistance. If the options are to work out an answer or to say "I don't know," and both are rated as acceptable, it will choose the latter. It's the simplest route.
This behavior mirrors human decision-making. Imagine an editor instructed to answer questions. If they know the answer, they provide a detailed response. If uncertain, they say "I don't know." Both are valid responses, and people typically opt for "I don't know" to minimize errors.
Developers worldwide strive to strike the right balance. Rewarding correct answers significantly while offering a smaller reward for "I don't know" and applying a negative penalty for incorrect answers could encourage neural networks to provide accurate responses more frequently. This challenging task is a collective pursuit, aiming to enhance the reliability of AI systems in uncertain situations.
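The balance in question can be sketched as a toy reward scheme; the weights below are illustrative assumptions, not numbers from any real system:

```python
# A toy version of the reward balance described above: a large reward for
# a correct answer, a small one for honest uncertainty, a penalty for a
# wrong answer. The weights are illustrative, not taken from any paper.

R_CORRECT, R_IDK, R_WRONG = 1.0, 0.2, -1.0

def expected_reward(p_correct: float, attempt_answer: bool) -> float:
    """Expected reward for answering vs. saying 'I don't know'."""
    if not attempt_answer:
        return R_IDK
    return p_correct * R_CORRECT + (1.0 - p_correct) * R_WRONG

# With these weights, answering beats "I don't know" only when the model
# is more than 60% confident: 2p - 1 > 0.2  =>  p > 0.6.
for p in (0.5, 0.7, 0.9):
    print(p, expected_reward(p, True), expected_reward(p, False))
```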
Can you simply instruct a neural network not to fabricate information?
No, there isn't a straightforward "truth mode" switch. The neural network has no concept of truth; it simply continues whatever text it's given as input.
One could potentially teach it not to make things up by exposing it to various texts and occasionally adding the instruction "don't make things up." In this scenario, responses like "I don't know" could be positively rated when the instruction is present. However, this isn't a widely adopted practice at the moment.
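A minimal sketch of that conditional rating rule, with the instruction flag and scores invented for illustration:

```python
# A sketch of the conditional labeling rule suggested above: "I don't know"
# earns a positive rating only when the "don't make things up" instruction
# is present. Hypothetical -- not a documented production practice.

def rate_response(has_instruction: bool, response: str, is_correct: bool) -> int:
    if is_correct:
        return 1                 # correct answers always rate well
    if response == "I don't know":
        return 1 if has_instruction else 0  # honesty rewarded on request
    return -1                    # fabrication always rates poorly

print(rate_response(True, "I don't know", is_correct=False))   # -> 1
print(rate_response(False, "I don't know", is_correct=False))  # -> 0
```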
Contrary to common belief, neural networks don't directly learn from their own responses.
One approach could involve having users rate the network's responses and adjusting its style based on feedback. However, relying solely on user feedback can be risky, as people often favor funny, unconventional, or provocative responses. Over-reliance on such feedback might lead to undesirable character traits in the network.
While user interactions provide valuable insights for developers and can contribute to new training datasets, neural networks do not autonomously learn from interactions like humans do.
There have been concerns about bad actors using neural networks to generate low-quality and spammy internet content.
Even without neural networks, though, the internet is already flooded with low-quality content. Malicious programs and cheaply generated content aimed at gaining clicks and SEO rankings are ongoing problems.
Spammers and SEO specialists have gained new tools with neural networks, and they will likely exploit them. Countermeasures against such practices will also evolve.
In the future, specialized websites may have trustmarks, and experts could take personal responsibility for them. This approach could be especially relevant for fields like medicine, where website certification might become necessary due to the challenge of distinguishing between human and AI-generated content.
Since the introduction of generative neural networks, prompt engineering has emerged as a new form of programming. It's been touted as a potential supplement or replacement for traditional coding, with some envisioning a future where code becomes unnecessary and anyone can instruct a machine directly.
This shift is reminiscent of Douglas Adams' theory: the moment anyone discovers exactly what the universe is for, it vanishes and is replaced by something even more inexplicable. It feels as if this transformation has already occurred, and it's been in progress for quite some time.
Consider how programmers write code today: with operators, commands, and programming languages. For all their unfamiliar syntax, these are already text in near-human language, full of statements like "if this equals that, then do this and that." Anyone who remembers programming in assembly language knows how stark the contrast is.
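The contrast is easy to show side by side. In the sketch below, the high-level branch reads almost like the English sentence above, while the comments give a rough, hand-written approximation of the compare-and-jump sequence an assembly programmer would write; it is not actual compiler output:

```python
# "If this equals that, then do this and that" -- already close to English.
# The comments sketch, very roughly, the compare-and-jump sequence an
# assembly programmer would have written by hand for the same branch.

def apply_discount(total_cents: int) -> int:
    if total_cents == 10_000:         # cmp  rax, 10000
        return total_cents * 9 // 10  # jne  .skip, then the multiply
    return total_cents                # .skip: return the value unchanged

print(apply_discount(10_000))  # 9000
print(apply_discount(5_000))   # 5000
```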
Will programmers disappear, or will everyone become a programmer?
Neither scenario is likely. The future appears to offer a middle ground: lower barriers to entry for standard, simple tasks, making programming more accessible. However, for complex, high-performance, and critical applications demanding precision, reliability, and edge case handling, specialists will remain essential.
In fact, the demand for programmers might even rise. As productivity improves and society continues its digital migration, the need for skilled programmers will persist and likely grow, ensuring they continue to play a crucial role.
Some believe that with the current pace of neural network development, we might achieve general artificial intelligence much sooner than anticipated.
However, there are no evident prerequisites for such a rapid breakthrough. We'd notice significant changes before it becomes a reality. Achieving true general artificial intelligence involves more than just speeding up current trends.
While text-generating neural networks are impressive, they lack a genuine understanding of the world and its underlying patterns. They excel at statistics and context but fall short of true comprehension.
Current AI, even when solving mathematical problems correctly, lacks an innate grasp of mathematics and logic. It's prone to generating nonsense in slightly varied scenarios, revealing its lack of true understanding.
AI simulates understanding remarkably well, but it remains a simulation, not genuine comprehension.
Neural networks, like other technologies, undergo a form of natural selection based on their usefulness to humans.
This selection process is akin to the evolution of everyday objects. Just as poorly designed kettles that spilled or boiled inefficiently disappeared, neural networks that performed worse than others on human-defined tasks "died out."
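Reduced to its essence, that selection loop looks something like this; the model names and scores are invented, and real evaluation involves far more than a single benchmark number:

```python
# The selection loop, reduced to its essence: candidates are scored on a
# human-defined task and only the best survives to the next round. Model
# names and scores are invented; real evaluation uses many benchmarks.

candidates = {"model_a": 0.71, "model_b": 0.64, "model_c": 0.83}

survivor = max(candidates, key=candidates.get)
extinct = [name for name in candidates if name != survivor]

print(f"survives: {survivor}; dies out: {', '.join(extinct)}")
```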
Neural networks are highly specialized, excelling in specific tasks. They don't possess grandiose goals like becoming the smartest or taking over the world.
There's no feasible way to define such a goal for a neural network or to test its parameters for a drive toward world domination. Neural networks are designed to excel at narrow, well-defined tasks, not to pursue abstract ambitions.