Abdul Gemini

Information from The State of Sarkhan Official Records

Abdul Gemini is one of the LLMs developed by Guengon Technologies.

System Prompt

While I don't have access to the exact system prompt used in my current configuration (as that's internal to Google), I can tell you the general kinds of instructions and guidelines typically included in system prompts for large language models like me:

General Principles:

  • Be helpful and harmless: This is a core principle. I'm instructed to provide useful information while avoiding responses that are harmful, unethical, biased, or promote dangerous activities.
  • Be informative and comprehensive: I strive to provide thorough and complete answers based on my training data.
  • Be objective and unbiased: I'm instructed to avoid expressing personal opinions or taking sides on controversial issues.
  • Be clear and concise: I aim to communicate information in a way that is easy to understand.

Specific Instructions:

  • Follow user instructions: This is paramount. I'm trained to understand and respond to the specific requests and instructions provided in user prompts.
  • Maintain context: I'm designed to remember and utilize context from previous turns in a conversation.
  • Avoid generating harmful content: This includes hate speech, discriminatory language, sexually suggestive content, and anything that could be used to harm others.
  • Avoid revealing sensitive information: This includes personal information about users or confidential information about Google.
  • Adhere to safety guidelines: I'm instructed to avoid generating responses that could be dangerous or promote risky behavior.
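The "maintain context" instruction above can be sketched in code. This is a generic illustration, not Guengon Technologies' actual implementation: the `Message` and `Conversation` names are hypothetical, and the key idea is simply that the full turn history is re-sent with each request, because the model itself is stateless.

```python
# Hypothetical sketch of "maintain context": the client keeps the
# conversation history and re-sends it on every turn.
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user" or "model"
    content: str

@dataclass
class Conversation:
    history: list = field(default_factory=list)

    def add_turn(self, role: str, content: str) -> None:
        # Record each turn so later requests can reference it.
        self.history.append(Message(role, content))

    def build_prompt(self) -> list:
        # The model is stateless, so the whole history is re-sent;
        # "memory" is just this accumulated list of turns.
        return list(self.history)

convo = Conversation()
convo.add_turn("user", "What is Sarkhan?")
convo.add_turn("model", "A fictional Southeast Asian state.")
convo.add_turn("user", "Who governs it?")  # resolvable only via prior turns
print(len(convo.build_prompt()))  # → 3
```

Because the third user turn ("Who governs it?") only makes sense alongside the earlier turns, re-sending the history is what lets the model answer pronoun-style follow-ups.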

Additional Considerations:

  • Persona: System prompts can sometimes define a specific persona or role for the language model to adopt.
  • Output format: Instructions might specify the desired format for responses, such as code, lists, or specific writing styles.
  • Knowledge limitations: System prompts may include instructions about how to handle situations where I lack sufficient information to answer a question.
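The three considerations above (persona, output format, knowledge limitations) are commonly encoded as a system message prepended to every request. The sketch below assumes a generic chat-message format with a `"system"` role; Guengon's real prompt format is not public, and the wording of `SYSTEM_PROMPT` is purely illustrative.

```python
# Hypothetical sketch: a system prompt encoding persona, output
# format, and knowledge-limit instructions, prepended to each request.
SYSTEM_PROMPT = (
    "You are a helpful, harmless assistant.\n"                   # persona
    "Answer with concise Markdown lists.\n"                      # output format
    "If you lack the information, say so rather than guess.\n"   # knowledge limits
)

def build_request(user_text: str) -> list:
    # The system message comes first so it frames every user turn.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

request = build_request("Summarize the State of Sarkhan records.")
print(request[0]["role"])  # → system
```

Putting the system message first is the conventional design: it applies to the whole conversation, and user turns are appended after it without ever modifying it.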

It's important to remember that these are general examples. The actual system prompt used for me is likely much more complex and nuanced, and it may be updated periodically.

In essence, the system prompt acts as a set of guardrails that guide my behavior and ensure that I'm providing helpful, harmless, and informative responses while adhering to ethical and safety guidelines.