Show HN: SwiftAI – open-source library to easily build LLM features on iOS/macOS

github.com

51 points by mi12-root 11 hours ago

We built SwiftAI, an open-source Swift library that lets you use Apple’s on-device LLMs when they’re available (Apple opened access in June) and fall back to a cloud model when they aren’t — all without duplicating code.

SwiftAI gives you:

- A single, model-agnostic API

- An agent/tool loop

- Strongly typed structured outputs

- Optional chat state

Backstory: We started experimenting with Apple’s local models because they’re free (no API calls), private, and work offline. The problem: not all devices support them (older iPhones, Apple Intelligence disabled, low battery, etc.). That meant writing two codepaths — one for local, one for cloud — and scattering branching logic across the app. SwiftAI centralizes that decision. Your feature code stays the same whether you’re on-device or cloud.

Example

  import SwiftAI

  let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

  let response = try await llm.reply(to: "Write a haiku about Hacker News")
  print(response.content)
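The feature list above also mentions strongly typed structured outputs. As a hedged sketch of what that might look like: the `@Generable` macro, the `@Guide` attribute, and the `reply(to:returning:)` overload below are assumptions modeled on Apple’s FoundationModels conventions (which SwiftAI aims for parity with), not confirmed SwiftAI signatures — check the GitHub docs for the actual API.

```swift
import SwiftAI

// Hypothetical: @Generable / @Guide follow FoundationModels naming,
// not necessarily SwiftAI's exact API.
@Generable
struct Haiku {
  @Guide(description: "Exactly three lines, 5-7-5 syllables")
  let lines: [String]
  let topic: String
}

let llm: any LLM = SystemLLM.ifAvailable ?? OpenaiLLM(model: "gpt-5-mini", apiKey: "<key>")

// Same call shape whether the model runs on-device or in the cloud;
// the response is decoded into the Haiku type instead of raw text.
let response = try await llm.reply(
  to: "Write a haiku about Hacker News",
  returning: Haiku.self
)
print(response.content.lines.joined(separator: "\n"))
```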

It's open source — we'd love for you to try it, break it, and help shape the roadmap. Join our Discord or Slack, or email us at root@mit12.dev.

Links

- GitHub (source, docs): https://github.com/mi12labs/SwiftAI

- System Design: https://github.com/mi12labs/SwiftAI/blob/main/Docs/Proposals...

- Swift Package Index (compat/builds): https://swiftpackageindex.com/mi12labs/SwiftAI

- Discord: https://discord.com/invite/ckfVGE5r and Slack: https://mi12swiftai.slack.com/join/shared_invite/zt-3c3lr6da...

rafram an hour ago

You may get a letter from Apple’s lawyers because of the name - Swift and SwiftUI are trademarks, and this seems like something they’d want to keep for themselves.

  • andsoitis 8 minutes ago

    > Swift and SwiftUI are trademarks

    This is called SwiftAI, though.

    • reactordev 2 minutes ago

If they can prove enough similarity or overlap with their brand, they’ll find a way. And since it targets macOS/iOS specifically, there you go.

jc4p 3 hours ago

I do a lot of AI work, and right now the story for running LLMs on iOS is very painful (though running Whisper and the like is pretty nice), so this is exciting, and the API looks Swift-native and great. I can't wait to use it!

Question/feature request: is it possible to bring my own CoreML models over and use them? I honestly end up bundling llama.cpp and running GGUF models right now because I can't figure out the setup for using CoreML models; I'd love for all of that to be abstracted away for me :)

  • mi12-root 2 hours ago

    That’s a good suggestion, and it indeed sounds like something we’d want to support. Could you help us better understand your use case? For example, where do you usually get the models (e.g., Hugging Face)? Do you fine-tune them? Do you mostly care about LLMs (since you only mentioned llama.cpp)?

    • jc4p 18 minutes ago

Thank you! I’ve been fine-tuning tiny Llama and Gemma models with transformers, then exporting from the safetensors it spits out. My main use case is LLMs, but I’ve also tried getting a fine-tuned YOLO and other PyTorch models running and hit similar problems; it just seemed very confusing to figure out how to properly use the phone for this.

deanputney 4 hours ago

Awesome, this is a good idea! Having a nice wrapper to make LLM calls easier is very helpful too :)

Nice to see someone digging in on the system models. That's on my list to play with, but I haven't seen much new info on them or how they perform yet.

  • mi12-root 3 hours ago

We’ve begun evaluating the model internally and will share our findings in more detail later. So far, it performs well on tasks such as summarization, writing, and data extraction, and shows particular strength in areas like history and marketing. However, it struggles with STEM topics (e.g., math and physics), often fails to follow long or complex instructions, and sometimes refuses to answer certain queries. If you’d like us to evaluate a particular use case or vertical, please share it with us!

keyle 3 hours ago

Needs more examples on custom.

  • mi12-root 3 hours ago

    Thanks for the feedback! When you say “custom,” do you mean additional integrations with LLM providers, or more documentation on how to build your own custom integration? If you mean the former, we’re currently focused on stabilizing the API and reaching feature parity with FoundationModels (e.g., adding streaming). After that, we plan to add more integrations, such as Claude, Gemini, and on-device LLMs from Hugging Face.