LLM Prompt Engineering Guide

Ninety percent of people don't know how to talk to AI effectively! An introduction to prompt engineering

Unlike earlier machine learning models, which had to be trained or tuned for each task, LLMs can be directed with plain natural language. This shift has spawned a wave of innovation and changed how the technology gets deployed. The science and art of using natural language to program a language model to accomplish a task is called prompt engineering.

Completion APIs

The most common way to use an LLM: you pass in a piece of text and the model generates a continuation.
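The snippets in this post rely on a few small helper functions (chat_completion, the user/system/assistant message constructors, and complete_and_print) that are not shown. The original guide targets Llama models; a minimal, hypothetical sketch, assuming any OpenAI-compatible chat endpoint (most Llama hosting providers expose one), might look like this:

# Hypothetical helpers assumed by the examples below; the client and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key / base URL for an OpenAI-compatible endpoint is configured
DEFAULT_MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"  # placeholder model identifier

def user(content: str) -> dict:
    return {"role": "user", "content": content}

def system(content: str) -> dict:
    return {"role": "system", "content": content}

def assistant(content: str) -> dict:
    return {"role": "assistant", "content": content}

def chat_completion(messages, model=DEFAULT_MODEL, temperature=0.6, top_p=0.9) -> str:
    response = client.chat.completions.create(
        model=model, messages=messages, temperature=temperature, top_p=top_p
    )
    return response.choices[0].message.content

def complete_and_print(prompt: str, **kwargs):
    # Echo the prompt between separators, then print the model's completion.
    print("==============")
    print(prompt)
    print("==============")
    print(completion(prompt, **kwargs))  # completion is defined in the next snippet

With helpers along those lines in place, the guide's completion wrapper is a thin layer over chat_completion: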

def completion(
    prompt: str,
    model: str = DEFAULT_MODEL,
    temperature: float = 0.6,
    top_p: float = 0.9,
) -> str:
    # Single-turn completion: wrap the raw prompt as a user message
    # and delegate to the chat API.
    return chat_completion(
        [user(prompt)],
        model=model,
        temperature=temperature,
        top_p=top_p,
    )

complete_and_print("The typical color of the sky is: ")

Response:

==============
The typical color of the sky is: 
==============
Blue!

Chat Completion APIs

Chat completion APIs add extra structure that lets us interact with the model over multiple turns, supplying additional context or conversation history as we go.

In the examples here, every conversation involves three roles: System, User, and Assistant.

System: the core instructions for the LLM, supplied by the developer.

User: you and me, the people taking part in the conversation.

Assistant: the role the LLM plays; it is the one that answers your questions.

response = chat_completion(messages=[
    system("Only response by one word"),
    user("My favorite color is blue."),
    assistant("That's great to hear!"),
    user("What is my favorite color?"),
])
print(response)
# "Sure, I can help you with that! Your favorite color is blue."

Response:

Blue

LLM Hyperparameters

Two key parameters, temperature and top_p, both shape what the LLM outputs.

At each step the model assigns a probability to every candidate token; the higher the probability, the more likely that token is the right continuation. top_p (nucleus sampling) drops the low-probability tail, keeping only the most likely candidates whose cumulative probability reaches p, and temperature controls how randomly the model samples from the candidates that remain.

In short:

top_p: controls how many candidate tokens are kept in the sampling pool.

temperature: controls randomness; the closer to 0, the more often the model picks the same result every time.
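As a quick illustration (not from the original guide), sampling the same prompt with the completion helper above at both ends of the range shows the effect:

# Demo only: compare near-deterministic sampling with highly random sampling.
prompt = "Write a one-sentence slogan for a coffee shop."

# Low temperature, narrow top_p: the three outputs are usually near-identical.
for _ in range(3):
    print(completion(prompt, temperature=0.1, top_p=0.5))

# High temperature, wide top_p: the three outputs usually differ noticeably.
for _ in range(3):
    print(completion(prompt, temperature=1.0, top_p=0.95))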

Prompt Engineering

Give clear, explicit instructions

Detailed, explicit instructions usually produce better results than open-ended prompts.

Open-ended prompts can spark creativity and give the model more latitude, but they sometimes lead to unpredictable, inconsistent results that miss the mark. When you need precise, repeatable output, detailed and explicit instructions are the more effective choice.

complete_and_print("Explain the latest advances in large language models to me.")
# More likely to cite sources from 2017

complete_and_print("Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.")
# Gives more specific advances and only cites sources from 2020

Response:

==============
Explain the latest advances in large language models to me.
==============
What an exciting time for natural language processing (NLP) and artificial intelligence (AI)! The latest advances in large language models have been nothing short of remarkable. Here's a rundown of the most significant developments:

**1. Transformer Architecture**: The transformer architecture, introduced in Vaswani et al.'s 2017 paper "Attention is All You Need," has become the de facto standard for large language models. This architecture relies on self-attention mechanisms to process input sequences in parallel, allowing for faster and more efficient processing.

**2. BERT (Bidirectional Encoder Representations from Transformers)**: Released in 2018, BERT is a pre-trained language model developed by Google that has revolutionized the NLP landscape. BERT uses a multi-layer bidirectional transformer encoder to generate contextualized representations of words in a sentence. These representations can be fine-tuned for specific NLP tasks, achieving state-of-the-art results in many areas, such as question answering, sentiment analysis, and language translation.

**3. RoBERTa (Robustly Optimized BERT Pretraining Approach)**: In 2019, Facebook AI introduced RoBERTa, a variant of BERT that uses a more robust approach to pre-training, including dynamic masking, which randomly replaces some input tokens with [MASK] tokens. RoBERTa has achieved even better results than BERT on many NLP tasks.

**4. XLNet**: Developed by Google and the University of California, Los Angeles (UCLA), XLNet is another large language model that builds upon the transformer architecture. XLNet uses a novel autoregressive approach to pre-training, which allows it to capture longer-range dependencies in text. XLNet has achieved state-of-the-art results on several NLP benchmarks.

**5. ELECTRA (Efficient Lifelong End-To-End Text Recognition with Attention)**: Released in 2020, ELECTRA is a new pre-training method that uses a generator-discriminator architecture to learn more robust and efficient language representations. ELECTRA has shown promising results on various NLP tasks, including text classification, sentiment analysis, and question answering.

**6. Scaling Up: Larger Models and More Parameters**: The trend in recent years has been to increase the size of language models, with some models boasting hundreds of millions or even billions of parameters. These larger models have been shown to achieve better performance on many NLP tasks, but they also require significant computational resources and large datasets.

**7. Specialized Models for Specific Tasks**: Researchers have developed specialized language models tailored to specific tasks, such as:
 * **DALL-E**: A text-to-image model that generates images from natural language descriptions.
 * **T5**: A text-to-text model that can perform tasks like text classification, sentiment analysis, and machine translation.
 * **DeBERTa**: A variant of BERT that uses a disentangled attention mechanism to improve performance on tasks like question answering and text classification.

**8. Multimodal Language Models**: With the rise of multimodal data (e.g., images, videos, and audio), researchers have started developing language models that can process and generate multimodal content. For example, **Visual BERT** combines computer vision and NLP to analyze and generate image captions.

**9. Efficient Inference and Deployment**: As large language models become more prevalent, there is a growing need for efficient inference and deployment techniques to make them more accessible and usable in real-world applications. Researchers are exploring techniques like knowledge distillation, pruning, and quantization to reduce the computational requirements of these models.

These advances have opened up new possibilities for natural language processing and have far-reaching implications for applications like chatbots, language translation, text summarization, and more.

Response (Explicit):

==============
Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.
==============
Here are the latest advances in large language models, citing sources from 2020 or later:

1. **Scaling up language models:** The most significant advancement in large language models is the scaling up of model sizes. Researchers have been exploring the limits of scaling up language models, and the results are impressive. For example, the Switch Transformers model [1] has achieved state-of-the-art results on several benchmarks with a model size of 1.6 trillion parameters.

Source: [1] Fedus, W., Zoph, B., & Shlens, J. (2021). Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. arXiv preprint arXiv:2106.04424.

2. **Efficient inference:** As language models grow in size, inference becomes computationally expensive. To address this, researchers have developed efficient inference techniques. For instance, the "DeepSpeed" system [2] reduces the memory requirements of large language models, making them more accessible.

Source: [2] Rajbhandari, S., et al. (2021). DeepSpeed: Extreme Scale Model Inference for Everyone. arXiv preprint arXiv:2106.01147.

3. **Multitask learning:** Large language models can be fine-tuned on multiple tasks simultaneously, leading to improved performance and robustness. The "Adapter-Hub" framework [3] enables efficient multitask learning by sharing knowledge across tasks.

Source: [3] Pfeiffer, J., et al. (2021). Adapter-Hub: A Framework for Adapting and Sharing Language Models. arXiv preprint arXiv:2104.08835.

4. **Explainability and interpretability:** As language models become more pervasive, understanding their decision-making processes is crucial. Techniques like "LIME" [4] and "TreeExplainer" [5] provide insights into the models' reasoning, enabling more transparent and trustworthy AI.

Source: [4] Ribeiro, M. T., et al. (2020). Explainable AI: A Review of the State-of-the-Art. arXiv preprint arXiv:2009.13233.

Source: [5] Shi, S., et al. (2021). TreeExplainer: Explaining Random Forest Predictions. arXiv preprint arXiv:2103.09554.

5. **Specialized language models:** Domain-specific language models are being developed to tackle unique challenges in areas like healthcare, finance, and law. For example, the "ClinicalBERT" model [6] is designed for clinical text analysis.

Source: [6] Huang, K., et al. (2021). ClinicalBERT: A Domain-Adapted Language Model for Clinical Text Analysis. Journal of the American Medical Informatics Association, 28(4), 731–739.

These advances demonstrate the rapid progress being made in large language models, with a focus on scalability, efficiency, multitask learning, explainability, and domain-specific applications.

Zero-Shot Prompting

Zero-shot prompting means giving an instruction or question without any concrete examples to illustrate it. It relies entirely on the clarity and completeness of the prompt: the LLM has to understand and respond from the prompt alone, with nothing to guide it or serve as a reference.

complete_and_print("Text: This was the best movie I've ever seen! \n The sentiment of the text is: ")
# Returns positive sentiment

complete_and_print("Text: The director was trying too hard. \n The sentiment of the text is: ")
# Returns negative sentiment

Response:

==============
Text: This was the best movie I've ever seen! 
 The sentiment of the text is: 
==============
The sentiment of the text is: POSITIVE

==============
Text: The director was trying too hard. 
 The sentiment of the text is: 
==============
Negative.

Few-Shot Prompting

Few-shot prompting means including concrete examples of the desired answer alongside the instruction or question. The examples make it much easier for the model to understand the task and produce the expected result.

def sentiment(text):
    response = chat_completion(messages=[
        user("You are a sentiment classifier. For each message, give the percentage of positive/netural/negative."),
        user("I liked it"),
        assistant("70% positive 30% neutral 0% negative"),
        user("It could be better"),
        assistant("0% positive 50% neutral 50% negative"),
        user("It's fine"),
        assistant("25% positive 50% neutral 25% negative"),
        user(text),
    ])
    return response

def print_sentiment(text):
    print(f'INPUT: {text}')
    print(sentiment(text))

print_sentiment("I thought it was okay")
# More likely to return a balanced mix of positive, neutral, and negative
print_sentiment("I loved it!")
# More likely to return 100% positive
print_sentiment("Terrible service 0/10")
# More likely to return 100% negative

Response:

INPUT: I thought it was okay
10% positive 70% neutral 20% negative
INPUT: I loved it!
90% positive 10% neutral 0% negative
INPUT: Terrible service 0/10
0% positive 0% neutral 100% negative

Role Prompting

When an LLM is explicitly told to play a particular role or identity, its responses tend to be more consistent and stable. The role gives the model context and behavioral guidelines, so its answers follow a clearer direction and framework.

complete_and_print("Explain the pros and cons of using PyTorch.")
# More likely to cover general areas like documentation and the PyTorch community, and to mention a steep learning curve

complete_and_print("Your role is a machine learning expert who gives highly technical advice to senior engineers who work with complicated datasets. Explain the pros and cons of using PyTorch.")
# Often results in more technical benefits and drawbacks, with deeper detail on things like model layers and memory management

Response:

==============
Explain the pros and cons of using PyTorch.
==============
PyTorch is a popular open-source machine learning library developed by Facebook's AI Research Lab (FAIR). It's widely used for building and training neural networks, particularly in the areas of computer vision, natural language processing, and reinforcement learning. Here are the pros and cons of using PyTorch:

**Pros:**

1. **Dynamic Compute Graph**: PyTorch's dynamic compute graph allows for more flexible and interactive development, making it easier to build and modify neural networks.
2. **Auto-Differentiation**: PyTorch's automatic differentiation system makes it easy to compute gradients, which is essential for training neural networks.
3. **Modular Architecture**: PyTorch's modular design makes it easy to build and customize neural networks using pre-built modules.
4. **Rapid Prototyping**: PyTorch's Pythonic API and dynamic compute graph enable rapid prototyping and experimentation.
5. **Large Community**: PyTorch has a large and active community, which means there are many resources available, including tutorials, documentation, and pre-trained models.
6. **Tensor Computations**: PyTorch provides an efficient tensor computation engine, which makes it well-suited for large-scale deep learning computations.
7. **Interoperability**: PyTorch can seamlessly integrate with other popular deep learning frameworks, such as TensorFlow and Keras.
8. **Research-Friendly**: PyTorch's flexibility and customizability make it a popular choice for research and development in the field of AI.
9. **Production-Ready**: PyTorch has been used in production environments, such as Facebook's AI-powered services, and has been optimized for performance and scalability.

**Cons:**

1. **Steeper Learning Curve**: PyTorch's dynamic compute graph and tensor computations can be challenging to learn for beginners, especially for those without a strong background in mathematics and computer science.
2. **Performance Overhead**: PyTorch's dynamic compute graph can introduce performance overhead, especially for small models or simple computations.
3. **Memory Management**: PyTorch's dynamic memory allocation can lead to memory leaks and fragmentation, which can be challenging to debug.
4. **Limited Support for Distributed Training**: While PyTorch has some support for distributed training, it's not as mature as some other frameworks, such as TensorFlow.
5. **Not as Mature as TensorFlow**: PyTorch is a relatively new framework, and while it has gained popularity quickly, it still lags behind TensorFlow in terms of maturity and stability.
6. **Limited Pre-Built Estimators**: PyTorch doesn't have as many pre-built estimators as some other frameworks, such as scikit-learn, which can make it more difficult to use for non-experts.
7. **Debugging Challenges**: PyTorch's dynamic compute graph can make it challenging to debug models, especially for beginners.
8. **Not as Widely Adopted in Industry**: While PyTorch is widely used in research and academia, it's not as widely adopted in industry as some other frameworks, such as TensorFlow.

Overall, PyTorch is a powerful and flexible framework that's well-suited for research and development in AI. While it has some drawbacks, its pros make it a popular choice among researchers and developers.

Response (with role):

==============
Your role is a machine learning expert who gives highly technical advice to senior engineers who work with complicated datasets. Explain the pros and cons of using PyTorch.
==============
Fellow engineers! As a machine learning expert, I'm happy to dive into the pros and cons of using PyTorch, a popular open-source machine learning framework.

**Pros:**

1. **Dynamic Compute Graph**: PyTorch's dynamic compute graph allows for more flexible and efficient computation. It's particularly useful when working with complex models, variable-length inputs, or dynamic neural networks.
2. **Rapid Prototyping**: PyTorch's Pythonic API and dynamic nature make it ideal for rapid prototyping and experimentation. You can quickly test and iterate on ideas without worrying about the underlying infrastructure.
3. **Auto-Differentiation**: PyTorch's automatic differentiation system (Autograd) simplifies the process of computing gradients, making it easier to implement complex models and optimize them efficiently.
4. **Modular Architecture**: PyTorch's modular design allows for easy integration of custom modules, making it easy to extend and customize the framework to suit specific needs.
5. **Large Community**: PyTorch has a large and active community, which translates to a wealth of pre-built modules, tutorials, and support resources.
6. **Tensor Computations**: PyTorch's tensor computations are highly optimized, making it suitable for large-scale deep learning applications.
7. **Interoperability**: PyTorch can seamlessly integrate with other popular deep learning frameworks, such as TensorFlow and Keras, allowing for easy model sharing and collaboration.

**Cons:**

1. **Steep Learning Curve**: PyTorch's dynamic nature and unique architecture can make it challenging for newcomers to grasp, especially for those without prior experience with dynamic compute graphs.
2. **Performance Overhead**: PyTorch's dynamic compute graph and Autograd system can introduce performance overhead, particularly for small models or simple computations.
3. **Limited Support for Distributed Training**: While PyTorch has made significant strides in distributed training, it still lags behind TensorFlow in terms of built-in support for large-scale distributed training.
4. **GPU Memory Management**: PyTorch's dynamic memory allocation can lead to memory fragmentation and increased GPU memory usage, which can be problematic for large models or limited GPU resources.
5. **Limited Support for Certain Optimizers**: PyTorch's optimizer implementation is not as extensive as TensorFlow's, which can limit the range of optimization algorithms available.
6. **Debugging Challenges**: PyTorch's dynamic nature and Autograd system can make debugging more complex, as the computation graph is constructed dynamically at runtime.
7. **Versioning and Compatibility Issues**: PyTorch's rapid development pace can lead to versioning and compatibility issues, making it essential to carefully manage dependencies and versioning.

In conclusion, PyTorch is an excellent choice for rapid prototyping, dynamic neural networks, and complex models. However, it may not be the best fit for applications requiring extreme performance, distributed training, or specific optimizer implementations. Carefully weigh the pros and cons before deciding whether PyTorch is the right fit for your project.

Now, go forth and torch-ify your datasets!

Chain-of-Thought

Chain-of-thought prompting adds a short phrase to the prompt that encourages the LLM to reason step by step. That guidance makes the model approach complex tasks more methodically, which reduces errors and improves the quality of the result.

prompt = "Who lived longer, Mozart or Elvis?"

complete_and_print(prompt)
# Llama 2 would often give the incorrect answer of "Mozart"

complete_and_print(f"{prompt} Let's think through this carefully, step by step.")
# Gives the correct answer "Elvis"

Response:

==============
Who lived longer, Mozart or Elvis?
==============
Wolfgang Amadeus Mozart (1756-1791) lived to be 35 years old.

Elvis Presley (1935-1977) lived to be 42 years old.

So, Elvis lived about 7 years longer than Mozart.

Response (with step-by-step):

==============
Who lived longer, Mozart or Elvis? Let's think through this carefully, step by step.
==============
Let's break it down step by step.

**Mozart's Life**

* Wolfgang Amadeus Mozart was born on January 27, 1756.
* He died on December 5, 1791.
* Therefore, Mozart lived for approximately 35 years.

**Elvis's Life**

* Elvis Aaron Presley was born on January 8, 1935.
* He died on August 16, 1977.
* Therefore, Elvis lived for approximately 42 years.

**Comparison**

Now, let's compare their lifespans:

* Mozart lived for 35 years.
* Elvis lived for 42 years.

Conclusion: Elvis lived longer than Mozart by about 7 years.

Self-Consistency

LLMs are probabilistic, so every generation involves some randomness, and the results can come out inconsistent or wrong. This happens even with chain-of-thought prompting.

Self-consistency was proposed to improve accuracy: generate an answer several times and take the most frequent one as the final result. This reduces the errors any single generation might introduce, but it costs more compute, since multiple answers have to be generated and compared.

import re
from statistics import mode

def gen_answer():
    response = completion(
        "John found that the average of 15 numbers is 40. "
        "If 10 is added to each number then the mean of the numbers is? "
        "Report the answer surrounded by backticks (example: `123`)",
    )
    # Pull out the number wrapped in backticks; return None if the model
    # did not follow the requested format.
    match = re.search(r'`(\d+)`', response)
    if match is None:
        return None
    return match.group(1)

# Ask the same question several times and keep the most common answer.
answers = [gen_answer() for _ in range(5)]

print(
    f"Answers: {answers}\n",
    f"Final answer: {mode(answers)}",
)

# Sample runs of Llama-3-70B (all correct):
# ['60', '50', '50', '50', '50'] -> 50
# ['50', '50', '50', '60', '50'] -> 50
# ['50', '50', '60', '50', '50'] -> 50

Response:

Answers: ['50', '50', '50', '50', '60']
 Final answer: 50

Retrieval-Augmented Generation (RAG)

When building applications you often need factual knowledge. Today's large language models already contain a great deal of it, absorbed into their pre-trained weights, and they can surface it out of the box, with no extra training or tuning. That makes it easy for developers to pull common knowledge straight from the model and use it to make an application smarter and more useful.

For more specific facts or private information, though, these models are unreliable. Asked about such details, a model will either admit that it doesn't know or produce a wrong, made-up answer (a "hallucination"). For data that demands high accuracy or confidentiality, relying on the model alone isn't appropriate; other measures are needed to keep the information reliable and secure.

complete_and_print("What was the temperature in Menlo Park on December 12th, 2023?")
# "I'm just an AI, I don't have access to real-time weather data or historical weather records."

complete_and_print("What time is my dinner reservation on Saturday and what should I wear?")
# "I'm not able to access your personal information [..] I can provide some general guidance"

Response:

==============
What was the temperature in Menlo Park on December 12th, 2023?
==============
I'm happy to help! However, I'm a large language model, I don't have real-time access to current or future weather data. Since December 12th, 2023, is in the future, I don't have the ability to provide the temperature for that specific date.

But I can suggest some alternatives:

1. Check online weather websites: You can check websites like AccuWeather, Weather.com, or the National Weather Service (NWS) for historical weather data or forecasts for Menlo Park, CA.
2. Use a weather app: You can download a weather app on your smartphone, such as Dark Sky or Weather Underground, which can provide you with current and historical weather data for Menlo Park.
3. Contact a local weather station: You can reach out to a local weather station or a university with a meteorology department in the Menlo Park area to inquire about historical weather data.

Please note that I'm an AI, and my knowledge cutoff is 2022. I don't have access to real-time or future data.

==============
What time is my dinner reservation on Saturday and what should I wear?
==============
I'm just an AI, I don't have access to your personal schedule or reservation information. I'm also not aware of any specific dinner reservation you may have made.

If you're trying to recall the details of a dinner reservation, you may want to check your calendar, email, or phone for the reservation confirmation. You can also try contacting the restaurant directly to confirm the details of your reservation.

As for what to wear, it depends on the restaurant's dress code and the occasion. If you're unsure, you can always call the restaurant or check their website to get an idea of their dress code policy.

Retrieval-Augmented Generation, or RAG, is the idea of including information retrieved from an external data source in the prompt. It is an effective way to bring facts into an LLM application, and it is much cheaper than fine-tuning, which can require expensive resources and may even degrade the base model's abilities.

A RAG implementation can be as simple as a lookup table or as sophisticated as a vector database holding all of a company's knowledge. Either way, the model gets to consult external knowledge while generating text, which improves its accuracy and usefulness.

MENLO_PARK_TEMPS = {
    "2023-12-11": "52 degrees Fahrenheit",
    "2023-12-12": "51 degrees Fahrenheit",
    "2023-12-13": "51 degrees Fahrenheit",
}


def prompt_with_rag(retrieved_info, question):
    complete_and_print(
        f"Given the following information: '{retrieved_info}', respond to: '{question}'"
    )


def ask_for_temperature(day):
    temp_on_day = MENLO_PARK_TEMPS.get(day) or "unknown temperature"
    prompt_with_rag(
        f"The temperature in Menlo Park was {temp_on_day} on {day}'",  # Retrieved fact
        f"What is the temperature in Menlo Park on {day}?",  # User question
    )


ask_for_temperature("2023-12-12")
# "Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit."

ask_for_temperature("2023-07-18")
# "I'm not able to provide the temperature in Menlo Park on 2023-07-18 as the information provided states that the temperature was unknown."

Response:

==============
Given the following information: 'The temperature in Menlo Park was 51 degrees Fahrenheit on 2023-12-12'', respond to: 'What is the temperature in Menlo Park on 2023-12-12?'
==============
The temperature in Menlo Park on 2023-12-12 is 51 degrees Fahrenheit.

==============
Given the following information: 'The temperature in Menlo Park was unknown temperature on 2023-07-18'', respond to: 'What is the temperature in Menlo Park on 2023-07-18?'
==============
The temperature in Menlo Park on 2023-07-18 is unknown.
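The lookup table above is the simplest possible retriever. The vector-database approach mentioned earlier changes only the retrieval step; the prompt is built the same way. A minimal sketch (not from the original post), assuming the OpenAI-compatible client from the helper snippet and its embeddings endpoint:

# Hypothetical embedding-based retrieval feeding the same prompt_with_rag helper.
import math

DOCUMENTS = [
    "The temperature in Menlo Park was 51 degrees Fahrenheit on 2023-12-12.",
    "The cafeteria is closed on weekends.",
    "Visitor parking is located in garage B.",
]

def embed(text: str) -> list[float]:
    # text-embedding-3-small is an assumption; swap in whatever embedding model you use.
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(question: str) -> str:
    # Return the document most similar to the question (re-embedding every document
    # per query is fine for a sketch; a real system would precompute and index these).
    q = embed(question)
    return max(DOCUMENTS, key=lambda doc: cosine(q, embed(doc)))

question = "What is the temperature in Menlo Park on 2023-12-12?"
prompt_with_rag(retrieve(question), question)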

Program-Aided Language Models

complete_and_print("""
Calculate the answer to the following math problem:

((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
""")
# Gives incorrect answers like 92448, 92648, 95463

Response:

==============

Calculate the answer to the following math problem:

((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))

==============
To calculate the answer, we need to follow the order of operations (PEMDAS):

1. Evaluate the expressions inside the parentheses:
 * (-5 + 93 * 4 - 0) = ?
  + 93 * 4 = 372
  + -5 + 372 - 0 = 367
 * (4^4 + -7 + 0 * 5) = ?
  + 4^4 = 256
  + 256 - 7 + 0 = 249
2. Multiply the two results:
 * 367 * 249 = 91483

So, the answer is 91483.

Program-Aided Language models (PAL) build on a simple observation: LLMs are poor at arithmetic but very good at generating code. PAL exploits this by instructing the LLM to write code that performs the computation, which makes the model far more effective on such tasks. It gives LLMs a more efficient and accurate way to handle problems that are best solved by generated code.

complete_and_print(
    """
    # Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
    """,
)

Response:

==============

    # Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
    
==============
Here is the Python code to calculate the given expression:
```
result = ((-5 + 93 * 4 - 0) * (4**4 - 7 + 0 * 5))
print(result)
```
Let's break down the calculation step by step:

1. `93 * 4 = 372`
2. `-5 + 372 - 0 = 367`
3. `4**4 = 256`
4. `256 - 7 + 0 * 5 = 249`
5. `367 * 249 = 91483`

So the final result is:
```
print(result)  # Output: 91483
```
Note: In Python, the `**` operator is used for exponentiation (e.g., `4**4` means "4 to the power of 4").

# The following code was generated by Llama 3 70B:

result = ((-5 + 93 * 4 - 0) * (4**4 - 7 + 0 * 5))
print(result)

Response:

91383
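To close the loop programmatically instead of copying the generated code by hand, one could extract the code block from the model's response and execute it. A rough sketch, not from the original post: it assumes the model wraps its code in triple backticks and assigns the answer to a variable named result, and executing model-generated code should only ever be done in an isolated sandbox.

import re

def solve_with_pal(problem: str):
    # Ask the model for code rather than a direct numeric answer.
    response = completion(f"# Python code to calculate: {problem}")
    # Take the first fenced code block if present, otherwise the raw response.
    match = re.search(r"```(?:python)?\s*(.*?)```", response, re.DOTALL)
    code = match.group(1) if match else response
    # WARNING: exec on model output is unsafe outside an isolated sandbox.
    namespace = {}
    exec(code, namespace)
    return namespace.get("result")  # assumes the generated code assigns to `result`

print(solve_with_pal("((-5 + 93 * 4 - 0) * (4**4 + -7 + 0 * 5))"))
# If the model reproduces the code shown above, this prints 91383.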

This article is also published on Medium – Ninety percent of people don't know how to talk to AI effectively! An introduction to prompt engineering

A long-hours computer worker who enjoys using technology to boost productivity and builds small, practical tools of their own. Deeply interested in new tech tools and happy to share tips on using them to improve life and work efficiency.
