Obliterate That Bad Habit Of Dumbing Down Your Prompts When Using Generative AI So You Can Improve Your Results Tenfold

I’m sure you’ve seen or heard people making gibberish childlike sounds when speaking to babies.

We tend to do so since we believe that the infant cannot otherwise comprehend our intelligible words. Various studies suggest that you ought to consider speaking normally to such youngsters as doing so can aid them in becoming tuned to natural language. They can potentially pick up on the cadence of regular English and begin to identify patterns such as where words start and end, where sentences start and end, and the like. You don’t have to always use everyday wording, but it seems prudent to intersperse those fun-to-make baby sounds with bona fide speech. Plus, maybe doing so will keep you sane and not cause you to lose touch with collegial adult lingo.

The gist of the situation is that at times we opt to dumb down our utterances.

One place where dumbing down is definitely a likely pitfall involves interacting with contemporary generative AI. Yes, it turns out that a lot of people using generative AI such as ChatGPT, GPT-4, Bard, Claude 2, and other akin AI apps fall into the trap of conversing with the AI in a dumbed-down mode.

A person using generative AI often tends to restrict their wording to the simplest possible words. They enter as prompts a curt statement or an overly brief question that may consist of a handful of words at most. I wouldn’t say that this counts as gibberish, though it is so short on substance that the phrasing appears as if you are stuck in a desperately low-bandwidth communication mode or perhaps paying dearly for each character painstakingly entered into the AI.

In a sense, you are not to blame for the habit of conveying your messaging in the shortest of phrasings. Blame the prior generation of AI and the less fluent natural language processing (NLP) capabilities that we have all endured for many years. Anyone who at first got excited about using Alexa or Siri was bound to pretty quickly become frustrated and altogether anguished. Whereas you might have been led to believe you could interact fluently, the reality was that you had to learn to constrain your commands and utterances.

It was and continues to be a nearly unbearable task. You might want to say that the AI should go ahead and raise the temperature to seventy-five degrees via your in-house temperature control device, yet the interpretation is that you said to turn on the outdoor porch lights. Frustrating and bewildering. Your only recourse was to dumb down what you utter. Speak slowly, one word at a time, and use the fewest words possible. The words chosen have to be simplistic, else the AI will get the whole string of words turned upside down and garbled.

Okay, so we all opted as sentient beings to dumb down our discourse with AI.

Along comes generative AI. This type of NLP is head and shoulders above the prior versions. You can convey your thoughts in full sentences. Furthermore, the sentences can be rambling or otherwise filled with all manner of fluff. Generative AI can usually ferret out what you are saying or trying to say. No longer do you particularly have to speak the lingo of the machine. The machine generally speaks your lingo (well, within various boundaries).

In today’s column, I want to concentrate on the problem that people have been trained or self-trained to dumb down their interactions with AI, which is no longer needed when using modern generative AI. I realize you might be tempted to say that this is a no-harm, no-foul kind of condition. If you want to communicate in choppy short words, you can certainly do so. The big downside is that you are undercutting the true value of using generative AI. You are inadvertently shooting yourself in the foot.

The bottom line is that if you interact with generative AI more fluidly, the odds are immensely heightened that you will get much better results. The essays you get generated are almost for sure going to be of a higher quality and closer to whatever you had in mind to obtain. The problem-solving by the AI is likely to be more surefire. Etc.

I would also add that your sense of well-being is decidedly going to rise. Here’s why. If you spend any substantial amount of time using generative AI and if you always have to be tricky and confine your prompts to the shortest possible exchanges, the chore is going to wear on you. A session with generative AI will seem to be endless and greatly tiring. The chances are that you will quietly in your mind decide to only use generative AI as a last resort.

On the other hand, if you use generative AI by entering prompts in an everyday natural style then the odds are that you will feel comfortable using the AI app. The effort will essentially be felt as effortless. Converse to your heart’s content. No need to bite your tongue or otherwise hold back as you write your requests. Just let it flow. The responses are going to be better and you will not expend undue energy using generative AI.

Seasoned users of generative AI have typically figured out that they can be expressive and that they do not need to hold themselves back in fluency. In fact, they often watch in rapt fascination when a newbie or someone who only occasionally uses generative AI opts to write in three-word or four-word sentences. It can be laughable. I would hope that any such seasoned user might extend a hand of helpfulness and explain to the unaware that they can type as they might normally speak.

Please adopt a pay-it-forward mantra in life, including aiding others who want to make use of generative AI.

During my workshops on prompt engineering, I often start by having attendees showcase how they have used generative AI or attempt to use the AI for the first time. Right away the shortness of prompts becomes apparent. The aim is to get everyone on board with using fluent prompts. Once we’ve broken the old habit of terse prompting, we can then move into learning the numerous techniques of prompting that can really make generative AI results shine.

I’d like to walk you through my nine steps to overcome old habits of stilted prompting. A newbie can find this quite instructive. They are being permitted to toss away the shackles of the constrained command structure of the likes of Alexa and Siri. A new sense of freedom is discovered. Seasoned users who have already gone the route of becoming fluent will potentially also benefit from considering the nine steps. We all tend to fall back into ruts, and the nine steps can aid in keeping you out of them.

Being on top of your game when it comes to the prompting and the use of generative AI is a prudent and significant approach that consists of overcoming old habits and forming suitable and useful new ones.

Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including this coverage:

  • (1) Imperfect prompts. Practical use of imperfect prompts toward devising superb prompts (see the link here).
  • (2) Persistent context prompting. Use of persistent context or custom instructions for prompt priming (see the link here).
  • (3) Multi-personas prompting. Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
  • (4) Chain-of-Thought (CoT) prompting. Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
  • (5) In-model learning and vector database prompting. Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
  • (6) Chain-of-Thought factored decomposition prompting. Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
  • (7) Skeleton-of-Thought (SoT) prompting. Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
  • (8) Show-me versus tell-me prompting. Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
  • (9) Mega-personas prompting. The gradual emergence of the mega-personas approach entails scaling up the multi-personas to new heights (see the link here).
  • (10) Certainty and prompts. Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
  • (11) Vague prompts. Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
  • (12) Prompt catalogs. Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
  • (13) Flipped Interaction prompting. Flipped interaction is a crucial prompt engineering technique that everyone should know (see the link here).
  • (14) Self-reflection prompting. Leveraging are-you-sure AI self-reflection and AI self-improvement capabilities is an advanced prompt engineering approach with surefire upside results (see the link here).
  • (15) Addons for prompting. Know about the emerging addons that will produce prompts for you or tune up your prompts when using generative AI (see the link here).
  • (16) Conversational prompting. Make sure to have an interactive mindset when using generative AI rather than falling into the mental trap of one-and-done prompting styles (see the link here).
  • (17) Prompt to code. Prompting to produce programming code that can be used by code interpreters to enhance your generative AI capabilities (see the link here).
  • (18) Target-your-response (TAR) prompting. Make sure to consider Target-Your-Response considerations when doing mindful prompt engineering (see the link here).
  • (19) Prompt macros and end-goal planning. Additional coverage includes the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
  • (20) Tree-of-Thoughts (ToT) prompting. Showcasing how to best use an emerging approach known as the Tree of Thoughts as a leg-up beyond chain-of-thought prompt engineering (see the link here).
  • (21) Trust layers for prompting. Generative AI will be surrounded by automated tools for prompt engineering in an overarching construct referred to as an AI trust layer, such as being used by Salesforce (see the link here).
  • (22) Directional stimulus prompting (aka hints). The strategic use of hints or directional stimulus prompting is a vital element of any prompt engineering endeavor or skillset (see the link here).
  • (23) Invasive prompts. Watch out that your prompts do not give away privacy or confidentiality (see the link here).
  • (24) Illicit prompts. Be aware that most AI makers have strict licensing requirements about prompts that you aren’t allowed to make use of and thus should avoid these so-called banned or illicit prompts (see the link here).
  • (25) Chain-of-Density (CoD) prompting. A new prompting technique known as Chain-of-Density has promising capabilities to jampack content when you are doing summarizations (see the link here).
  • (26) Take-a-deep-breath prompting. Some assert that if you include the line of taking a deep breath into your prompts this will spur AI to do a better job (see the link here).
  • (27) Chain-of-Verification (CoV) prompting. Chain-of-Verification is a new prompting technique that seeks to overcome AI hallucinations and force AI into self-verifying its answers (see the link here).
  • (28) Beat the Reverse Curse. Generative AI does a lousy job of deductive logic, especially regarding initial data training, a malady known as the Reverse Curse, but there are ways to beat the curse via sound prompting (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

  • The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything on-target related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses for suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try to help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts; see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Overcoming Old Habits Entailing AI Interactions

Let’s take a look at a quick example to illustrate the issues associated with underutilizing generative AI by being stuck in the old ways of choppy commands and stilted interactions.

Suppose that someone named Michael wanted to go hiking in the Grand Canyon. It turns out that Michael will be accompanied by his father. The father is a bit older and has had some knee issues. The hike therefore should be on a trail that would be less taxing and accommodate a safe journey. At the same time, they both want to enjoy the breathtaking scenery and not simply take a slow-paced turtle walk along the level rim. The aim is to find a trail that would be simultaneously suitable for the two of them, yet neither dangerous nor excessively tame.

Michael logged into ChatGPT and opted to use a short request to find out about hiking in the Grand Canyon. The assumption for this request is that the generative AI is akin to something like Alexa or Siri whereby your best bet is to be short and sweet on your queries or commands.

  • This is the prompt entered: “Tell me about hiking in the Grand Canyon.”
  • Here is the response by ChatGPT: “Hiking in the Grand Canyon is a popular and awe-inspiring outdoor adventure that attracts visitors from around the world. The Grand Canyon, located in Arizona, USA, is one of the most iconic and stunning natural wonders on the planet. It offers a wide range of hiking opportunities, from short day hikes to multi-day backpacking trips. It is important to be well-prepared, stay safe, and respect the natural beauty of this remarkable place. Whether you’re planning a day hike or an extended backpacking trip, it’s a journey that offers breathtaking vistas and a deep connection to the natural world.”

You can readily discern from the response that ChatGPT is extremely generic about the Grand Canyon and how to take hikes there. The generated content is probably on par with doing an Internet search and landing on a breezy website that touts the amazing vistas and scenery of the Grand Canyon.

Some people using generative AI would at this point opt to discontinue using the AI in this context due to assuming that the extent of available responsiveness about the Grand Canyon has been reached. In their mind, the effort to use the AI app would seem of little added value. All they are presumably going to get is vanilla-flavored responses.

You cannot especially blame the AI app for having provided a bland answer. The prompt was bland and thus the response was bland. If you want to get generative AI to be more expressive, you have to provide grist for the mill, as it were. The point is that you can open the dialogue by sharing sufficient details to lean the GenAI toward what you are trying to figure out or get produced.
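
For readers who drive generative AI through an API rather than a chat window, the same principle applies: the prompt string you send is where the detail has to live. Below is a minimal sketch, assuming the OpenAI Python client (openai version 1.x), an OPENAI_API_KEY environment variable, and an illustrative model name, that contrasts a terse prompt with a fluent, context-rich one.

```python
# Minimal sketch: terse versus fluent prompting via an API.
# Assumptions: the OpenAI Python client (openai >= 1.0), an OPENAI_API_KEY
# environment variable, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the generated text."""
    response = client.chat.completions.create(
        model="gpt-4",  # swap in whichever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Terse, Alexa-style prompt: expect a generic answer.
print(ask("Tell me about hiking in the Grand Canyon."))

# Fluent, context-rich prompt: the extra detail gives the AI something to work with.
print(ask(
    "I would like to go hiking in the Grand Canyon with my father. He is older "
    "and has some weakness in his knees. We want a trail that is scenic and "
    "somewhat arduous, yet safe for both of us. Which trail do you recommend, "
    "and how should we prepare?"
))
```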

Being an experienced user of generative AI, Michael realized that a more detailed and personalized prompt would be needed. There is a solid chance that ChatGPT will be able to produce a much more on-target indication about hiking at the Grand Canyon by being given clues or indications of what the backstory is and what is being considered.

  • Here is his new prompt: “I would like to go hiking in the Grand Canyon and do so with my father. He is older and has some weakness in his knees. We want to hike on a trail that would be scenic and somewhat arduous, but at the same time be safe for both of us and fun to hike. Which trail do you recommend that we hike on and what should we do in terms of preparing for the hike? Provide specific details so that we can know what to prepare for and how to proceed with the hike.”
  • Response by ChatGPT: “Considering your father’s age and the potential knee weakness, it’s essential to choose a trail that offers both scenic beauty and safety. The Bright Angel Trail is a popular choice because it provides stunning views and is well-maintained. The Bright Angel Trail is one of the more accessible and well-maintained trails in the Grand Canyon. It offers stunning views of the canyon and allows hikers to experience the dramatic landscape without necessarily descending all the way to the Colorado River. Resthouses are strategically located along the trail, providing places to rest and access water during your hike. Due to its popularity, it’s generally considered a safer option with more hikers around in case of any issues.”
  • “Preparation includes: (1) Get a Backcountry Permit: If you plan to descend beyond 1.5 miles (2.4 km) down the trail, you’ll need a backcountry permit. You can apply for this permit through the National Park Service. For day hiking within the first 1.5 miles of the trail, a permit isn’t required. (2) Plan Your Route: Decide how far down the Bright Angel Trail you and your father want to hike. Popular options include the 1.5-mile Resthouse or the 3-mile Resthouse, which are good turnaround points for day hikers. Consult the park’s trail map and consider your father’s physical condition when choosing your destination. (3) Know the Weather: Check the weather forecast for the day of your hike. It’s essential to be prepared for the possibility of changing weather conditions, even during the day. (4) Walking Poles: Consider bringing hiking poles, which can be helpful for stability and reducing strain on the knees.”

The generated indication about hiking the Bright Angel Trail makes a lot of sense (for those of you who have hiked the Grand Canyon, you know there are numerous trails and that the Bright Angel Trail would be a good choice for the particular circumstance of Michael and his father).

A few subtle, tailored recommendations also arise in the ChatGPT response. One is that the response suggests using walking poles. If you look closely, the response says that using hiking poles can reduce the strain on the knees. The odds are that this is not simply a generic indication (which, admittedly, it could be), but more likely was mentioned as a result of the prompt that brought up the situation of the father’s knees.

The response overall by ChatGPT is pretty good and provides specific handy insights. At this juncture, someone stuck in old habits might quit the conversation because they assume that everything that could be said on the topic has now been said by the AI.

Keep ever-present in mind that generative AI is all about interaction. The best way to garner full value from using generative AI is to customarily carry on a conversation.

That is what Michael opted to do.

  • Follow-up prompt: “Is there anything we should do before the hike? I want to see if there is anything I should be doing or that I could suggest that my father do for the trip, thanks.”
  • Response by ChatGPT: “Before the hike, consider consulting with a doctor if your father has any medical conditions. Additionally, some light exercises and stretches can help prepare for the hike and reduce the strain on the knees. Working with a physical therapist or personal trainer can be beneficial to ensure your father is physically ready. Remember that safety is paramount, so don’t push your father beyond his physical capabilities. The Grand Canyon offers incredible views even from the upper portions of the trail, so you can still have a fantastic and safe hiking experience without descending all the way to the river. Enjoy your adventure, stay safe, and cherish the memories you’ll create in this breathtaking natural wonder.”
  • Additional prompt: “Besides the Bright Angel Trail, can you tell me about other trails at the Grand Canyon? I might consider harder trails. Also, in the future, I might be going with experienced hikers and would want to pursue a trail of greater difficulty.”
  • Response by ChatGPT: “There are several popular hiking trails in the Grand Canyon, including (1) Bright Angel Trail: One of the most famous trails, it descends from the South Rim and offers water and resthouses along the way. (2) South Kaibab Trail: Another popular South Rim trail with stunning views but fewer amenities than Bright Angel. (3) North Kaibab Trail: The primary trail from the North Rim to the Colorado River, offering different perspectives of the canyon. (4) Havasu Falls Trail: A trail leading to the iconic Havasu Falls, which requires permits and is known for its stunning blue-green waterfalls. (5) Grandview Trail: Offers a more challenging hike with unique views of the canyon.”

The conversation could have kept going.

A seasoned user of generative AI will keep the dialogue running until they believe that they have uncovered or discovered whatever remaining morsels might be of use. You don’t always have to keep probing and should be selective as to when it makes sense to get engaged in a conversation and when not to do so.
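
If you happen to be scripting your use of generative AI, carrying on a conversation simply means resending the accumulated dialogue with each new turn. Here is a minimal sketch, again assuming the OpenAI Python client and an illustrative model name, that keeps the message history so the AI retains the context of earlier turns.

```python
# Minimal sketch of a multi-turn conversation: prior turns are resent with every
# request so the AI keeps the full context. Assumptions: the OpenAI Python client
# (openai >= 1.0) and an illustrative model name.
from openai import OpenAI

client = OpenAI()
history = []  # accumulates the whole dialogue, turn by turn

def converse(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(converse("Which Grand Canyon trail suits an older hiker with weak knees?"))
print(converse("Is there anything we should do before the hike?"))  # follow-up builds on the prior turn
```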

The emphasis is that you should keep at the top of your mind the option to engage in a conversation, rather than neglecting to consider the possibility. There is no need anymore to do merely a one-and-done query. Those one-and-done queries are based on old habits, ingrained back when curtailing your wording was a rightful response to the agonizing and exasperating task of spurring stilted old-time NLP AI to get up to the task at hand.

Breaking The Old Habits Via These Nine Steps

If you are just starting out with using generative AI, I’ve got nine easy-peasy steps that can help you overcome any prior bad habits when it comes to using AI. For those of you who are seasoned users of generative AI, take a look at the nine steps and they might helpfully remind you of how to avoid falling back into an old rut. I will showcase the nine steps and then provide a brief overall explanation of them.

Here are nine steps to overcome old bad habits and end up using GenAI soundly and smartly:

  • (1) Be cognizant of old bad habits. You will need to set aside the irksome instilled habit of using Alexa or Siri whereby you reduce your wording to babytalk, and instead revert to full sentences and everyday fluency when using modern generative AI.
  • (2) Think big. Try to imagine that you are launching into a full-on dialogue with the AI. What would you like to ask? What context is important for the GenAI to be informed of? And so on.
  • (3) Make a conversation. No more of the one-and-done types of curt queries. Anticipate that you’ll enter a prompt, the AI will respond, you will respond with another prompt, the AI will react to that, etc. It is a conversation in the making.
  • (4) Challenge the AI. With old habits, it seemed that you needed to coddle the NLP AI and hold your tongue to prevent the AI from getting confounded. GenAI is generally able to handle being challenged. You can tell the AI its answer is not useful or that you even believe the answer to be wrong. Go for it.
  • (5) Flip the interaction. A clever technique of prompt engineering involves getting the GenAI to ask you questions (see my elaboration at the link here). If you are the type of person who has a hard time bearing half of a conversation, you can make things easier for yourself by simply getting the GenAI to ask you about whatever it is you want to discuss.
  • (6) Vary your interactivity. You can use long sentences some of the time and also proudly use short sentences some of the time. I don’t want you to forsake short sentences. The overarching idea is to be casually and normally conversational. Mix things up.
  • (7) Emotional is okay. There is a usual tendency to converse with NLP AI in strictly non-emotional ways. For GenAI, you can toss in all kinds of side stories, compliments, insults, or whatever might seem natural to your way of interacting. That being said, please do not go overboard and become churlish.
  • (8) Seek to adopt new habits. At first, it might be hard to avoid falling into the curtness trap of old habits. Keep punching away at the new approach. The more you use it, the chances are that it will become your new habit. You hopefully will eventually proceed naturally and not have to be concerned with your now discarded old ways.
  • (9) Judiciousness counts. A skeptic might insist that sometimes being curt and doing a one-and-done is perfectly fine. I agree! There are times when the old way is still useful. The viewpoint here is that you should be aware of what you are doing and be able to be fluent in your prompting when it is suitable to do so. Judiciousness in how you compose and use prompts is the key.

I walk people through those nine steps during my workshops on prompt engineering for generative AI. The steps plainly spell out the notion that you are probably mired in old bad habits of interacting with NLP AI. By taking the above steps, you can breathe new life into how you are using or going to use generative AI.

As stated in the steps, first you need to realize that you might be stuck in the old ways. Next, you should allow yourself to mentally think in full-bodied interactions and proceed to converse with GenAI in that mental mode. It is fine to challenge GenAI and provoke a conversation. It is fine to do a flipped interaction involving having GenAI ask you questions. Lots of techniques can prod generative AI into being more engaged and ostensibly revealing during conversations.
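
To make the flipped-interaction idea concrete, here is a minimal sketch of an opening prompt that asks the AI to do the questioning. The wording is my own illustration of the technique rather than a prescribed template, and the OpenAI Python client and model name are, as before, assumptions.

```python
# Minimal sketch of a flipped-interaction opening prompt: the AI asks the questions.
# The prompt wording is illustrative; the OpenAI Python client (openai >= 1.0) and
# model name are assumptions.
from openai import OpenAI

client = OpenAI()

flipped_prompt = (
    "I want to plan a Grand Canyon hike. Instead of me describing everything up front, "
    "you ask me questions, one at a time, about my fitness, my companions, and my "
    "schedule. Once you have enough information, recommend a trail and a preparation plan."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": flipped_prompt}],
)
print(response.choices[0].message.content)  # the AI's first question back to you
```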

The big picture hope is that you will become used to carrying on full conversations with GenAI. No extra mental effort will be required to overcome those older choppy sentences. They will be gone from your normal repertoire. That being the case, this does not mean that you cannot ever use choppy sentences.

Be judicious and use sentences and wordings that vary and benefit the discussion underway.

Conclusion

I would like to clarify that I am not suggesting that generative AI can converse on par with humans. You will always still need to keep your guard up. GenAI can get lost during a conversation and go off on tangents that you didn’t intend to invoke. The possibility of generative AI emitting falsehoods, biases, and so-called AI hallucinations is something you need to be wary of.

There is though a sense of relief that you can avoid baby talk and almost carry on everyday conversational ins and outs with generative AI. Advances in GenAI will continue to improve this capability. The amount of fluency will increasingly be amazing and startling. Whatever you do, please don’t interpret or misinterpret the GenAI fluency to imply sentience. It is all too easy to do so. Remember at all times that you are conversing with a machine and not a human.

Another quick heads-up relates to what you enter as prompts into GenAI. Be cautious. I say this because most people don’t seem to realize that the usual licensing agreements for most generative AI apps allow the AI maker to see the entered prompts, including making use of the entered content for further data training of the GenAI (see my coverage at the link here). Do not enter material that you might consider confidential or private. Assume that whatever prompts you enter could someday be banner headlines on the front page of the news, which I hope doesn’t happen to you (fortunately, the odds are slim). I trust that you get my strident warning and will be guarded accordingly.
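
If you script your prompts, one rough, hypothetical way to act on this caution is to scrub obvious identifiers before anything leaves your machine. The sketch below is my own illustration rather than a technique from the column, and the patterns are far from exhaustive; treat it as a reminder, not a guarantee of privacy.

```python
# Rough, hypothetical sketch: strip obvious identifiers (emails, phone numbers)
# from a prompt before sending it to a generative AI service. The patterns are
# illustrative only and do not catch every kind of sensitive content.
import re

def scrub(prompt: str) -> str:
    prompt = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL REDACTED]", prompt)
    prompt = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE REDACTED]", prompt)
    return prompt

print(scrub("Contact me at jane.doe@example.com or 555-123-4567 about the contract."))
# -> "Contact me at [EMAIL REDACTED] or [PHONE REDACTED] about the contract."
```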

A final thought for now on this weighty matter.

William Shakespeare said this about having conversations: “Conversation should be pleasant without scurrility, witty without affectation, free without indecency, learned without conceitedness, novel without falsehood.”

Try to uphold that piece of sage advice, and maybe generative AI will do likewise.
