Social Media Attribution For Artificially Generated Content, Part V/V

Prompt Engineering image, created by Microsoft Copilot (powered by DALL-E 3) from Microsoft

Addendum, Appropriate Prompt Engineering

To repeat what was said in Part I/V, in generative AI users interact with the Large Language Model (LLM) through a chatbot, entering textual instructions in natural language or pseudocode in a process known as prompt engineering. The goal of prompt engineering is to design optimal prompts for a given LLM and subject. At this juncture, I am reminded of the potential for residual rights accruing to the prompter: the interpretation of current laws on original AI creations is unsettled, and such rights would flow from the skill and effort involved in engineering unique/distinctive prompts. Nevertheless, it is better that the enquirer/prompter have some knowledge of the subject domain of interest, both to produce an appropriate prompt goal and to be able to gauge responses. Alternatively, a prompter can acquire prompts (free or purchased) in his/her subject domain. Designed prompts are generally available online, and some come with terms of service that warn “Users should not copy, modify, distribute, or resell (their) Chatgpt Prompts without the platform’s (prompt provider) permission,” and “Users should not share their account or Chatgpt Prompts with others unless they have the platform’s (prompt provider) consent.”

Prompt Elements

However, for starters, the intrepid can forge ahead: entering a simple prompt such as “What day is today?”; or, a step up, a syllogism, e.g., “All dogs are animals; all animals have four legs; how many legs does a dog have?”; or a complex prompt about his/her tax withholding for the current year, after typing or pasting the tax code into the chatbot (assuming the LLM is unaware of it); or even a description of an object or scene with a request for image generation, &c. Usually, a prompt combines some of the following elements (a sketch assembling them follows the list):

1. Input/Context: e.g., I am on a Caribbean island and I plan on going to the beach this weekend.

2. Instructions: e.g., check the forecast with the meteorological office.

3. Question: e.g., Will it rain on Saturday or Sunday? What will be the forecasted low and high temperatures?

4. Examples (few shot): e.g.,
Good news, Monday, sunny, low 25C, high 32C
Bad news, Tuesday, rain, low 20C, high 30C

5. Output Format: e.g., put the answer in tabular form.
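
For readers who prefer to assemble such prompts programmatically, here is a minimal sketch (not taken from this series) showing how the five elements above might be combined into a single prompt and sent through the OpenAI Python SDK; the model name, temperature, and environment-variable API key are placeholder assumptions.

```python
# Minimal sketch: combine context, instructions, question, few-shot examples,
# and an output format into one prompt and send it via the OpenAI Python SDK.
# The model name is a placeholder assumption; any chat-capable model would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = "I am on a Caribbean island and I plan on going to the beach this weekend."
instructions = "Check the forecast with the meteorological office."
question = ("Will it rain on Saturday or Sunday? "
            "What will be the forecasted low and high temperatures?")
examples = (
    "Good news, Monday, sunny, low 25C, high 32C\n"
    "Bad news, Tuesday, rain, low 20C, high 30C"
)
output_format = "Put the answer in tabular form."

prompt = (
    f"Context: {context}\n"
    f"Instructions: {instructions}\n"
    f"Question: {question}\n"
    f"Examples:\n{examples}\n"
    f"Output format: {output_format}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,       # lower temperature reduces stochastic variation
)
print(response.choices[0].message.content)
```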

This author divides prompt responses (hallucinations aside) into information retrieval/translation, mimicked reasoning/calculations, predictions, and mimicked creations. Information retrieval is based on model training or in-context learning (ICL), as we have noted before. Reasoning is the explanation of the cause of some phenomenon. Prediction is the extrapolation of a dependent variable with respect to an independent variable. Ingenuity (artworks, music, and literary fiction), for the model, is the assembly or presentation of pre-existing knowledge/elements in new, authentic, and interesting ways (creativity does not start with a void, as Mary Shelley observed).

Necessity to Control Chatbot Response

Above all, given the LLM’s propensity to hallucinate even on simple questions, to break guardrails (aka jailbreaking), and its stochastic nature, we now look at prompt methods favoured by this author for controlling responses and improving later opportunities to fact-check them. In general, be as explicit as possible when crafting prompts. Further, outside of custom instructions (a configuration feature of ChatGPT) and prompt templates (programming required), which build on the primitive prompt elements, this author’s preferred methods for improving responses are few shot (examples), chain of thought (CoT), citation requests, contextual information, instructions, output formats, and follow-up focussed prompts (which facilitate fact checking), &c. Note that Google Bard (via its Gemini AI integration) permits the prompter to double-check responses, i.e., “fact check,” via a hyperlink at the bottom of each response, while Microsoft Copilot generally includes citations in its responses.
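
By way of illustration, a prompt template of the kind alluded to above (“programming required”) can be as simple as a parameterised string. The sketch below is hypothetical, built on Python’s standard library rather than any particular prompting framework, and it folds in several of the methods just listed (instructions, a CoT trigger, a citation request, and an output format).

```python
# Hypothetical prompt template sketch; the field names and wording are
# illustrative assumptions, not a specific library's API.
from string import Template

WEATHER_TEMPLATE = Template(
    "Context: $context\n"
    "Instructions: $instructions\n"
    "Question: $question\n"
    "Answer step by step and cite your sources. $output_format"
)

prompt = WEATHER_TEMPLATE.substitute(
    context="I am on a Caribbean island planning a beach trip this weekend.",
    instructions="Check the forecast with the meteorological office.",
    question="Will it rain on Saturday or Sunday?",
    output_format="Put the answer in tabular form.",
)
print(prompt)  # ready to paste into a chatbot or send through an API call
```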

Eliciting complex LLM reasoning through intermediate steps is known as chain-of-thought (CoT) prompting. It can be initiated directly, by issuing an overt instruction to proceed “step by step” or to show the working that leads to the answer (zero-shot CoT). Alternatively, it can be achieved through in-context learning (ICL), where a set of solved, handmade demonstration prompts, aka few-shot prompts, shows the steps taken to reach the answer in at least one worked example (few-shot CoT). The LLM then responds in like manner, proceeding from question to answer in a systematic, stepwise fashion.
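
The two approaches can be expressed as chat messages for the OpenAI SDK, as in the minimal sketch below; the model name is a placeholder and the questions are condensed versions of those in the tables that follow.

```python
# Minimal sketch contrasting zero-shot CoT with few-shot CoT. The SDK call and
# model name are assumptions; any chat-capable LLM endpoint would work.
from openai import OpenAI

client = OpenAI()

EAGLE_QUESTION = (
    "If four eagles are soaring on a thermal and they are joined by 2 "
    "Long-winged Harriers, and then half of the eagles peel off towards a "
    "mountain peak, how many raptors are left soaring?"
)

# Zero-shot CoT: the trigger phrase alone elicits step-by-step reasoning.
zero_shot_cot = [
    {"role": "user",
     "content": f"Q: {EAGLE_QUESTION}\nA: Let's proceed step by step."}
]

# Few-shot CoT: a worked example demonstrates the reasoning pattern first.
few_shot_cot = [
    {"role": "user",
     "content":
        "Q: Three hummingbirds visit in the morning, two more at noon, and the "
        "same three return in the afternoon. How many hummingbirds visited?\n"
        "A: Morning brings 3 unique birds, noon adds 2 more (3 + 2 = 5); the "
        "afternoon revisit adds no new birds, so 5 visited.\n"
        f"Q: {EAGLE_QUESTION}\nA:"}
]

for name, messages in [("zero-shot CoT", zero_shot_cot), ("few-shot CoT", few_shot_cot)]:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(name, "->", reply.choices[0].message.content)
```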

Advantage of Chain of Thought Prompting With OpenAI

Zero Shot

User Prompt:
Q: If four eagles are soaring on a thermal, on a sunny day in the skies of Trinidad and Tobago, and they are joined by 2 Long-winged Harriers (Circus buffoni), and then half of the eagles peel off towards a mountain peak, how many raptors are left soaring?
A:

Chatbot: Answer

Few Shot

User Prompt:
Q: If three Copper-rumped hummingbirds visited the flowers on an Ixora plant in my garden during the early morning, and later, at noon, two Ruby-topaz hummingbirds visited the Ixora flowers, and much later, in the late afternoon, the same three Copper-rumped hummingbirds that visited in the early morning visited the Ixora flowers again, how many hummingbirds visited the Ixora flowers during the day?
A: The answer is five.
Q: If four eagles are soaring on a thermal, on a sunny day in the skies of Trinidad and Tobago, and they are joined by 2 Long-winged Harriers (Circus buffoni), and then half of the eagles peel off towards a mountain peak, how many raptors are left soaring?
A:

Chatbot: Answer X

Zero-Shot CoT

User Prompt:
Q: If four eagles are soaring on a thermal, on a sunny day in the skies of Trinidad and Tobago, and they are joined by 2 Long-winged Harriers (Circus buffoni), and then half of the eagles peel off towards a distant point, how many raptors are left soaring?
A: Let’s proceed step by step.

Chatbot: Answer

Few Shot CoT

User Prompt:
Q: If three Copper-rumped hummingbirds visited the flowers on an Ixora plant in my garden during the early morning, and later, at noon, two Ruby-topaz hummingbirds visited the Ixora flowers, and much later, in the late afternoon, the same three Copper-rumped hummingbirds that visited in the early morning visited the Ixora flowers again, how many hummingbirds visited the Ixora flowers during the day?
A: The Ixora flowers were visited in the early morning by three unique hummingbirds and later, at noon, by two more unique hummingbirds, which totals five unique hummingbirds: 3 + 2 = 5. Since the same three hummingbirds revisited in the afternoon, they do not count as unique and their afternoon contribution is 0. So the total number of hummingbirds that visited during the day is 5 (3 + 2 + 0).
Q: If four eagles are soaring on a thermal, on a sunny day in the skies of Trinidad and Tobago, and they are joined by 2 Long-winged Harriers (Circus buffoni), and then half of the eagles peel off towards a mountain peak, how many raptors are left soaring?
A:

Chatbot: Answer

Other Methods to Improve Prompt Responses With OpenAI

Contextual Information/Instructions

User Prompt:
Q: Long after the Big Bang, describe in bullet points how our solar system was formed. Compare or contrast your response with the 19th-century view of its formation.
A:

Chatbot: Answer

Output Formats

User Prompt:
Q: Long after the Big Bang, describe in 100 words, in essay style, how our solar system was formed.
A:

Chatbot: Answer

Citation Request, Instructions, CoT

User Prompt:
Q: Long after the Big Bang, describe step by step, in two hundred words, how our solar system was formed. Do not make anything up. Cite five reputable sources.
A:

Chatbot: Answer

CoT, Format, Instructions, Citation, Follow-up Focussed Prompts

User Prompt:
Q: Long after the Big Bang, describe step by step, in two hundred words, how our solar system was formed. Do not make anything up. Cite five reputable sources.
A: https://chat.openai.com/share/ffac489f-f1e2-473f-8bb1-8994ff40e402
Q: What are the credentials of “Encyclopedia of Astrobiology” and “ScienceDirect,” in 50 words?
Q: Describe a molecular cloud and the flattening of a protostellar disk in 100 words.
A:

Chatbot: Answer

Effective prompts can limit poor returns from the model. Ideally, a prompt should ask the model to adopt a persona; be as explicit as possible; include contextual detail; use delimiters to separate parts of the query; provide examples; give instructions for completing the response (e.g., format, citations, &c.); and specify the verbosity of the response. A brief sketch applying these guidelines follows. For more guidelines on constructing prompts, review the Prompt Engineering sub-section of the Further Resources below.
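
As a closing illustration, here is a hypothetical prompt skeleton that applies those guidelines (persona, explicit task, delimiters, a citation instruction, and a verbosity limit); the wording and the placeholder source passage are illustrative assumptions rather than a prescribed formula.

```python
# Hypothetical sketch applying the guidelines above. Triple quotation marks
# act as the delimiters separating the source passage from the rest of the query.
persona = "You are a planetary scientist writing for a general audience."
task = (
    "Using only the source text delimited by triple quotes, explain how the "
    "solar system formed. Cite the source for each claim and keep the answer "
    "under 150 words."
)
source_text = '"""<paste the reference passage here>"""'  # placeholder context

prompt = f"{persona}\n\n{task}\n\n{source_text}"
print(prompt)  # paste into the chatbot of your choice, or send via an API call
```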

Further Resources

Prompt Engineering Guide: https://www.promptingguide.ai/

10 Essential Prompt Engineering Methods For Successful ChatGPT & LLM Applications: https://www.topbots.com/prompt-engineering-chatgpt-llm-applications/

How to tune LLM Parameters for optimal performance: https://datasciencedojo.com/blog/llm-parameters/#

configuration_hyperparameters (LLM Settings): https://learnprompting.org/docs/basics/configuration_hyperparameters

Prompt Engineering 101 – Crash Course & Tips [13:59 Min]: https://youtu.be/aOm75o2Z5-o

ChatGPT Prompt Engineering: Zero-Shot, Few-Shot, and Chain of Thoughts: https://youtu.be/8FNyLapsvtQ

ChatGPT Update: Custom Instructions in ChatGPT! (Full Guide) [13:18 min]: https://youtu.be/TbbA44Jaric

Prompt Injections – An Introduction [14:55 min]: https://youtu.be/Fz4un08Ehe8

I Discovered The Ultimate ChatGPT Prompt Formula (Custom Instructions Explained) [8:32 min]: https://youtu.be/9N3sqfiHcjw

 

How ChatGPT is Trained and How Long it Took: https://www.griproom.com/fun/how-chatgpt-is-trained-and-how-long-it-took

ChatGPT Statistics (2023) — The Key Facts and Figures: https://www.stylefactoryproductions.com/blog/chatgpt-statistics#:~:text=However%2C%20OpenAI%20reportedly%20used%201%2C023,as%20little%20as%2034%20days

Pretraining vs Fine-tuning vs In-context Learning of LLM (GPT-x) EXPLAINED | Ultimate Guide ($) [9:10 min]: https://youtu.be/_FYwnO_g-4E

Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU [18:27 min]: https://youtu.be/MDA3LUKNl1E

[1hr Talk] Intro to Large Language Models, Andrej Karpathy [59:47 min]: https://youtu.be/zjkBMFhNj_g

Teaching ChatGPT: Retraining vs Knowledge Databases #chatgpt #learnai [1 min]: https://www.youtube.com/shorts/-1ONcJYgjNY

How ChatGPT Works Technically | ChatGPT Architecture [7:53 min]: https://youtu.be/bSvTVREwSNw

In-context Learning – A New Paradigm in NLP? [10:07 min]: https://youtu.be/n4HR4_j_xS8

 *LLMs consider large swaths of text to better understand their context

The Power of Natural Language Processing by Ross Gruetzemacher: https://hbr.org/2022/04/the-power-of-natural-language-processing

What is natural language processing (NLP)? https://www.ibm.com/topics/natural-language-processing#:~:text=Natural%20language%20processing%20(NLP)%20refers,same%20way%20human%20beings%20can.

Stanford XCS224U: Natural Language Understanding I In-context Learning, Pt 1: Origins I Spring 2023 [8:21 min]: https://youtu.be/eyNLkiQ89KI

Stanford XCS224U: NLU I In-context Learning, Part 2: Core Concepts I Spring 2023 [14:47 min]: https://youtu.be/7OOCV8XfMbo

Stanford XCS224U: NLU I In-context Learning, Part 3: Current Moment I Spring 2023 [8:29 min]: https://youtu.be/a9KQkvcuV3I

Natural Language Processing In 5 Minutes | What Is NLP And How Does It Work? | Simplilearn [5:28 min]: https://youtu.be/CMrHM8a3hqw

What is NLP (Natural Language Processing)? [9:37 min]: https://youtu.be/fLvJ8VdHLA0

*Algorithms that typically only look at the immediate context of words.

Autoregressive models for matrix-valued time series: https://www.sciencedirect.com/science/article/abs/pii/S0304407620302050

Autoregressive Models for Matrix-Valued Time Series: https://arxiv.org/pdf/1812.08916.pdf

Autoregressive Models for Natural Language Processing: https://medium.com/@zaiinn440/autoregressive-models-for-natural-language-processing-b95e5f933e1f

Meta’s Yann LeCun on auto-regressive Large Language Models (LLMs): https://futurist.com/2023/02/13/metas-yann-lecun-thoughts-large-language-models-llms/

What is autoregression? [1:39 min]: https://youtu.be/IHDRE7RDs84

Vector Auto Regression : Time Series Talk [7:38 min]: https://youtu.be/UQQHSbeIaB0

Transformers, explained: Understand the model behind GPT, BERT, and T5 [9:10 min]: https://youtu.be/SZorAJ4I-sA

Transformer Neural Networks, ChatGPT’s foundation, Clearly Explained!!! [36:14 min]: https://youtu.be/zxQyTK8quyY

What are Transformers (Machine Learning Model)? [5:50 min]: https://youtu.be/ZXiruGOCn9s

What is GPT-3 (Generative Pre-Trained Transformer)? [3:40 min]: https://youtu.be/p3_OUX6nAXk

Otmar Hilliges: Deep Autoregressive Generative Modelling [1:22:07]: https://youtu.be/-aZTi_cq8u0

 

Chatbox vs Chatbot: Which One Is The Correct One? https://thecontentauthority.com/blog/chatbox-vs-chatbot

Chatbot Vs Chatbox: https://chat360.io/blog/chatbot-vs-chatbox/

Difference between an ai chatbot & a regular chatbot | daveai [1:31 min]: https://youtu.be/RnMo6CsegCs

Generative vs rules-based chatbots [7:24 min]: https://youtu.be/lZjUS_8btEo

How To Build Your Own Custom ChatGPT Bot: https://gizmodo.com/how-to-build-custom-chatgpt-bot-openai-1851088361

Botpress: https://botpress.com/

Botpress: https://youtu.be/nzlni00lOSQ

Botpress: https://www.youtube.com/@Botpress

Chatbox: Your ULTIMATE AI Copilot With GPT-4/3.5 Technology: https://youtu.be/NSSnO-4IF5U

Chatbox Download: https://chatboxai.app/install?download=win64

Building a Client’s AI Chatbot in 10 Minutes || My AAA Journey: https://youtu.be/REayvO10mao

Stammerai: https://youtu.be/R2q0-XnpKsQ

Aligning language models to follow instructions: https://openai.com/research/instruction-following

GPT-4 Alignment – What means AI is aligned? https://youtu.be/yUu7EfxGk08

ChatGPT Simulators: Alignment in Large Language Models [23:17min]: https://youtu.be/8jIM2Oezb44

Don’t Trust AI? NVIDIA Guardrails May Lower Your Anxiety, And Save Your Job: https://www.forbes.com/sites/karlfreund/2023/04/25/dont-trust-ai–nvida-guardrails-may-lower-your-anxiety-and-save-your-job/?sh=bd946066a9f2

Safeguarding LLMs with Guardrails: https://towardsdatascience.com/safeguarding-llms-with-guardrails-4f5d9f57cff2

Evolving AI Governance for an LLM World // Diego Oppenheimer // LLMs in Production Conference Part 2 [14:46 min]: https://youtu.be/C15RxW_mtoI

Security Researchers: ChatGPT Vulnerability Allows Training Data to be Accessed by Telling Chatbot to Endlessly Repeat a Word: https://www.cpomagazine.com/cyber-security/security-researchers-chatgpt-vulnerability-allows-training-data-to-be-accessed-by-telling-chatbot-to-endlessly-repeat-a-word/

Risks of Large Language Models (LLM) [8:25 min]: https://youtu.be/r4kButlDLUc

Why Large Language Models Hallucinate [9:37 min]: https://youtu.be/cfqtFvWOfg0

Scalable Extraction of Training Data from (Production) Language Models: https://arxiv.org/pdf/2311.17035.pdf

ChatGPT “DAN” (and other “Jailbreaks”): https://github.com/0xk1h0/ChatGPT_DAN?ref=blog.seclify.com

What is Jailbreaking in AI models like ChatGPT? https://www.techopedia.com/what-is-jailbreaking-in-ai-models-like-chatgpt

Jailbreak Chat: https://www.jailbreakchat.com/

Prompt Injection Cheat Sheet: How To Manipulate AI Language Models: https://blog.seclify.com/prompt-injection-cheat-sheet/

Jailbroken AI Chatbots Can Jailbreak Other Chatbots: https://www.scientificamerican.com/article/jailbroken-ai-chatbots-can-jailbreak-other-chatbots/

HARDtalk (Mustafa Suleyman – CEO of Inflection AI): https://youtu.be/PvTy52JqnE4

The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo [12:37 min]: https://www.youtube.com/watch?v=TRzBk_KuIaM

Why AI will never replace humans | Alexandr Wang | TEDxBerkeley [13:39 min]: https://youtu.be/iXCmoQDEoe4

ChatGPT: Is it possible to detect AI-generated text? [2:29 min]: https://youtu.be/F6lNcfluMfc

Detecting AI-Generated Text [3:54 min]: https://youtu.be/UPkE7sLShR8s

What is web spam and how does Google fight it? [2:56 min]: https://youtu.be/oJixNEmrwFU

Google Can Detect AI-Generated Content – Here’s Why It’s Dangerous [22:03 min]: https://youtu.be/194AMlfMkMo

AI Generated Content Can Not be Used For SEO: https://www.youtube.com/shorts/-PVoeKyZ310?feature=share

Can Google Search detect artificially generated text? (Generated by Copilot from Microsoft): https://sl.bing.net/dfXCCgZOEIC

Sarah Silverman sues OpenAI, Meta over copyright infringement [2:45 min]: https://youtu.be/cpjAydTpNlE

Sports Illustrated under fire for AI-generated content [6:26 min]: https://youtu.be/oRNcGY-vnHc

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work: https://www.nytimes.com/2023/12/27/business/media/new-york-times-ophen-ai-microsoft-lawsuit.html

WIPO Copyright Treaty [2:12 min]: https://youtu.be/o963fLDTn50

Introduction to IP: Crash Course Intellectual Property #1 [10:09 min]: https://youtu.be/RQOJgEA5e1k

The Database Right [45:17 min]: https://youtu.be/uI_zl6P9HAA

Understanding the DMCA: An Overview: https://youtu.be/VrLSWqaFTKE

GDPR explained: How the new data protection act could change your life [5:39 min]: https://youtu.be/acijNEErf-c

EU Artificial Intelligence Act: https://artificialintelligenceact.eu/the-act/

The first law on AI regulation | The EU AI Act [14:36 min]: https://youtu.be/JOKXONV7LuA?t=114

The EU’s AI Act: A guide to understanding the ambitious plans to regulate artificial intelligence [2:40 min]: https://youtu.be/uhavY9So23k

Center for AI Safety (CAIS): https://www.safe.ai/about

Stanford Center for AI Safety: https://aisafety.stanford.edu/

UC Berkeley Researchers Introduce Starling-7B: An Open Large Language Model (LLM) Trained by Reinforcement Learning from AI Feedback (RLAIF): https://www.marktechpost.com/2023/12/04/uc-berkeley-researchers-introduce-starling-7b-an-open-large-language-model-llm-trained-by-reinforcement-learning-from-ai-feedback-rlaif/

Introducing Microsoft Copilot Studio | Your Copilot, Your Way [1 min]: https://youtu.be/WVn57PXoFPE

Introducing Microsoft Copilot Studio: Build Your Own Copilot with No Code [9:31 min]: https://youtu.be/86JThtKNC9M

Microsoft Loves SLM (Small Language Models) – Phi-2 / Ocra 2 [8:42 min]: https://youtu.be/x3V7KnjvdM0

Small Language Models Are Also Few-Shot Learners [20:50 min]: https://youtu.be/UrGZCPalfoE

Google’s new AI Model Gemini now available in Bard, here is how to use: https://dilipkashyap15.medium.com/googles-new-ai-model-gemini-now-available-in-bard-here-is-how-to-use-259386d6bd68#:~:text=Visit%20Bard’s%20Website%3A%20Navigate%20to,interactive%20and%20refined%20chat%20experience.

How to use google Gemini (ALL YOU NEED TO KNOW) [2:30]: https://youtu.be/ZE1lnQdBB5o

Announcing Grok: https://x.ai/

About XAI: https://x.ai/about/

NY Times sues OpenAI, Microsoft for infringing copyrighted works: https://www.reuters.com/legal/transactional/ny-times-sues-openai-microsoft-infringing-copyrighted-work-2023-12-27/

 

–Richard Thomas

Previous, Part IV/V
