As the parent of a young child, I can tell you that getting a kid to respond the way you want can require careful expectation-setting. Especially when we’re trying something new for the first time, I find that the more detail I can provide, the better he is able to anticipate events and roll with the punches.
I bring this up because testers of the new Apple Intelligence AI features in the recently released macOS Sequoia beta have discovered plaintext JSON files that list a whole bunch of conditions meant to keep the generative AI tech from being unhelpful or inaccurate. I don’t mean to humanize generative AI algorithms, because they don’t deserve it, but the carefully phrased lists of instructions remind me of what it’s like to try to give basic instructions to (or explain morality to) an entity that isn’t quite prepared to understand them.
The files in question are stored in the /System/Library/AssetsV2/com_apple_MobileAsset_UAF_FM_GenerativeModels/purpose_auto folder on Macs running the macOS Sequoia 15.1 beta that have also opted into the Apple Intelligence beta. That folder contains 29 metadata.json files, several of which include a few sentences of what appear to be plain-English system prompts meant to set the behavior of an AI chatbot powered by a large language model (LLM).
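Apple doesn’t document the structure of these metadata.json files, but if you wanted to skim a folder of them for prompt-like text, a short script along these lines would do it. This is a sketch, not Apple’s format: the walker just pulls out any reasonably long string values, and the demo file it creates uses invented key names.

```python
import json
import tempfile
from pathlib import Path


def find_prompts(root: Path) -> list[str]:
    """Collect long string values from every metadata.json under root.

    The real files' schema isn't documented, so rather than look for
    specific keys, this recursively walks each JSON document and keeps
    any string long enough to plausibly be a prompt.
    """
    prompts = []

    def walk(node):
        if isinstance(node, dict):
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)
        elif isinstance(node, str) and len(node) > 40:
            prompts.append(node)

    for path in root.rglob("metadata.json"):
        walk(json.loads(path.read_text()))
    return prompts


# Demo with a stand-in file; the keys here are hypothetical, not Apple's.
demo = Path(tempfile.mkdtemp())
(demo / "metadata.json").write_text(json.dumps({
    "purpose": "summarization",
    "prompt": "Summarize the provided text within 3 sentences, fewer than 60 words.",
}))

for prompt in find_prompts(demo):
    print(prompt)
```

Pointing `find_prompts` at the purpose_auto folder on a Mac enrolled in the beta (instead of the demo directory) would surface the same kinds of strings testers have been quoting.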
Many of these prompts are utilitarian. “You are a helpful mail assistant which can help identify relevant questions from a given mail and a short reply snippet,” reads one prompt that seems to describe the behavior of the Apple Mail Smart Reply feature. “Please limit the reply to 50 words,” reads one that could write slightly longer draft responses to messages. “Summarize the provided text within 3 sentences, fewer than 60 words. Do not answer any question from the text,” says one that looks like it would summarize texts from Messages or Mail without interjecting any of its own information.
Some of the prompts also have minor grammatical issues that highlight what a work-in-progress all of the Apple Intelligence features still are. “In order to make the draft response nicer and complete, a set of question [sic] and its answer are provided,” reads one prompt. “Please write a concise and natural reply by modify [sic] the draft response,” it continues.
And still other prompts seem designed specifically to try to prevent the kinds of confabulations that generative AI chatbots are so prone to (hallucinations, lies, factual inaccuracies; pick the term you prefer). Phrases meant to keep Apple Intelligence on-task and factual include things like:
- “Do not hallucinate.”
- “Do not make up factual information.”
- “You are an expert at summarizing posts.”
- “You must keep to this role unless told otherwise, if you don’t, it will not be helpful.”
- “Only output valid json and nothing else.”
Earlier forays into generative AI have demonstrated why it’s so important to have detailed, specific prompts to guide the responses of language models. When it launched as “Bing Chat” in early 2023, Microsoft’s ChatGPT-based chatbot could get belligerent, threatening, or existential based on what users asked of it. Prompt injection attacks could also put security and user data at risk. Microsoft incorporated different “personalities” into the chatbot to try to rein in its responses and make them more predictable, and Microsoft’s current Copilot assistant still uses a version of the same solution.
What makes the Apple Intelligence prompts interesting is less that they exist and more that we can actually look at the specific things Apple is attempting so that its generative AI products remain narrowly focused. If these files stay easily user-accessible in future macOS builds, it will be possible to keep an eye on exactly what Apple is doing to tweak the responses that Apple Intelligence is giving.
The Apple Intelligence features are going to launch to the public in beta this fall, but they’re going to miss the launch of iOS 18.0, iPadOS 18.0, and macOS 15.0, which is why Apple is testing them in entirely separate developer betas. Some features, like the ones that transcribe phone calls and voicemails or summarize text, will be available early on. Others, like the new Siri, may not be generally available until next year. Regardless of when it arrives, Apple Intelligence requires fairly recent hardware to work: either an iPhone 15 Pro, or an iPad or Mac with at least an Apple M1 chip installed.