As many of us remember, prompt engineering became a hot topic in the early days following the release of generative AI models like ChatGPT and Stable Diffusion, as people quickly recognized the value of crafting elaborate prompts to get the best outputs. Entire careers were even dedicated to prompt engineering, highlighting the importance of this skill in maximizing AI's potential across industries.
For older AI models, prompting techniques like few-shot prompting and chain-of-thought reasoning were very helpful for tasks that required careful decision-making or an understanding of complex ideas. These methods guided the model through each step, allowing it to answer difficult questions more accurately and reliably.
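For instance, a few-shot, chain-of-thought prompt for an older model might look something like the sketch below. The worked examples are hypothetical and simply illustrate the pattern of showing the model solved examples with explicit reasoning before posing the real question:

```python
# A hypothetical few-shot, chain-of-thought prompt of the kind older models
# often needed. The worked examples are illustrative, not from any real dataset.
prompt = """
Q: A shop sells pens at $2 each. How much do 3 pens cost?
A: Let's think step by step. One pen costs $2, so 3 pens cost 3 * $2 = $6.
The answer is $6.

Q: A train travels 60 km per hour. How far does it go in 2.5 hours?
A: Let's think step by step.
"""
```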
However, with today's advanced models, prompt engineering has evolved. Models like Claude 3.5 Sonnet, OpenAI GPT-4o, and Flux.1 need only simple, precise prompts to achieve high-quality generation. We just need to clearly state what we want from the LLM or other generative AI, and it will typically provide good output. This shift has made interacting with AI much easier, removing the complexity that was once necessary for effective results. There's no need to coax the model to get what we want.
In today's software development, prompting LLMs works the same way: just specify your needs clearly. For example, when asking for code, simply state the programming language and your objective. There is no need to start the prompt with, "You are a senior software engineer; please write a function to solve ABC... and break the problem down step by step."
Let's give it a try on one of the most powerful large language models.
Let's create a more comprehensive app using a natural language prompt:
I want to build a Python app that takes user questions, searches for similar questions in a database with existing answers, and returns the answer if a close match is found. If no match is found, it prompts the user for an answer and saves both the question and answer in the database.
Plan a directory structure for this app, then create and detail each file.
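A response along these lines is typical. The sketch below condenses such an app into a single file for brevity, assuming SQLite for storage and `difflib` for similarity matching; the file name, the similarity threshold, and all function names are illustrative, not output reproduced from any particular model:

```python
# qa_app.py -- a minimal, single-file sketch of the app described in the
# prompt above. DB_PATH and SIMILARITY_THRESHOLD are illustrative choices.
import sqlite3
from difflib import SequenceMatcher

DB_PATH = "questions.db"
SIMILARITY_THRESHOLD = 0.8  # how close a stored question must be to count as a match


def init_db(path: str = DB_PATH) -> sqlite3.Connection:
    """Open the database and create the Q&A table if it does not exist yet."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS qa (question TEXT PRIMARY KEY, answer TEXT)"
    )
    conn.commit()
    return conn


def find_similar(conn: sqlite3.Connection, question: str) -> str | None:
    """Return the answer for the most similar stored question, or None."""
    best_score, best_answer = 0.0, None
    for stored_q, stored_a in conn.execute("SELECT question, answer FROM qa"):
        score = SequenceMatcher(None, question.lower(), stored_q.lower()).ratio()
        if score > best_score:
            best_score, best_answer = score, stored_a
    return best_answer if best_score >= SIMILARITY_THRESHOLD else None


def main() -> None:
    conn = init_db()
    question = input("Ask a question: ").strip()
    answer = find_similar(conn, question)
    if answer is not None:
        print(f"Answer: {answer}")
    else:
        # No close match: ask the user for an answer and store the new pair.
        new_answer = input("No match found. Please provide an answer: ").strip()
        conn.execute("INSERT OR REPLACE INTO qa VALUES (?, ?)", (question, new_answer))
        conn.commit()
        print("Saved. Thanks!")


if __name__ == "__main__":
    main()
```

A real model response would typically go further, proposing a directory structure (for example, separate modules for database access and matching logic) and detailing each file, as the prompt requests.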
As you can see above, the responses are quite good without using any prompt engineering techniques. In earlier days, the code might have lacked comments or explanations, or the model might simply have returned code in another language. This improvement reflects the advances in natural language processing and the model's ability to understand user intent more effectively.