Erik Hornberger Profile
Erik Hornberger

@mopperson

Followers: 336
Following: 479
Media: 5
Statuses: 96

SwiftAI Frameworks Engineer at Apple

Pittsburgh, PA
Joined January 2017
@mopperson
Erik Hornberger
6 days
RT @ShunTakeishi: It's only for September 11, but a Foundation Models framework engineer will now be joining us as well. If you'd like to talk with an engineer directly, please apply for the September 11 slot. And of course, applications for the September 9 and 10 slots are still welcome too.
@mopperson
Erik Hornberger
13 days
If you’re passionate about the intersection of AI and API design, there is arguably no better team than this one. Opportunities to be part of something this big are few and far between.
@rxwei
Richard Wei
13 days
The team that built the Foundation Models framework is hiring! Join us to create the best LLM APIs, both for app developers and for Apple teams.
@mopperson
Erik Hornberger
22 days
The update also contains optimizations to the inference stack that improve token throughput. I'm curious to see how noticeable the difference is in your use cases. (6/6).
@mopperson
Erik Hornberger
22 days
GeneratedContent is no longer opaque. It now has a kind property that you can inspect. This makes it possible to create views that display arbitrary generated content, which can be useful when working with user-configurable generation schemas. (5/6).
developer.apple.com
A representation of the different types of content that can be stored in GeneratedContent.
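A hedged sketch of what such a view could look like. The case names and associated values of GeneratedContent.Kind below are inferred from the tweet and the doc summary rather than copied from the SDK, so verify them before relying on this:

```swift
import SwiftUI
import FoundationModels

// Renders arbitrary GeneratedContent by switching on its kind.
// Assumption: Kind exposes cases roughly like the ones matched here.
struct GeneratedContentView: View {
    let content: GeneratedContent

    var body: some View {
        switch content.kind {
        case .null:
            Text("–")
        case .bool(let value):
            Text(value ? "Yes" : "No")
        case .number(let value):
            Text(value, format: .number)
        case .string(let value):
            Text(value)
        case .array(let elements):
            // Recurse into each generated element.
            VStack(alignment: .leading) {
                ForEach(Array(elements.enumerated()), id: \.offset) { _, element in
                    GeneratedContentView(content: element)
                }
            }
        case .structure(let properties, let orderedKeys):
            // Recurse into each property, preserving the schema's key order.
            VStack(alignment: .leading) {
                ForEach(orderedKeys, id: \.self) { key in
                    if let value = properties[key] {
                        LabeledContent(key) {
                            GeneratedContentView(content: value)
                        }
                    }
                }
            }
        @unknown default:
            Text("Unsupported content")
        }
    }
}
```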
@mopperson
Erik Hornberger
22 days
We've added a new feedback button that appears when you use #Playground. It lets you submit feedback about model behavior right from Xcode. Please make ample use of it! (4/6).
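For anyone who hasn't used it, #Playground comes from the Playgrounds framework in Xcode 26 and runs a snippet right in the canvas. A minimal example of the kind of playground where the feedback button appears (the prompt is made up):

```swift
import FoundationModels
import Playgrounds

// Runs a one-off prompt in Xcode's canvas; the feedback button appears
// alongside the model's output.
#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Name three fun facts about Pittsburgh.")
    print(response.content)
}
```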
@mopperson
Erik Hornberger
22 days
A new “Refusal” error that allows the model to explain why it can’t provide an answer, even when using guided generation. Structured refusals make it possible to apply special UI treatment to responses that don't contain an answer. (3/6).
developer.apple.com
An error that happens when the session refuses the request.
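A sketch of giving refusals their own UI treatment. Whether refusals surface as a dedicated case on LanguageModelSession.GenerationError, as matched here, and what associated values that case carries are assumptions to check against the SDK:

```swift
import FoundationModels

enum AssistantReply {
    case answer(String)
    case refused           // gets a distinct "no answer" UI treatment
    case failed(String)
}

func reply(to question: String) async -> AssistantReply {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: question)
        return .answer(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // Assumption: the new refusal error is a case on GenerationError.
        if case .refusal = error {
            return .refused
        }
        return .failed(error.localizedDescription)
    } catch {
        return .failed(error.localizedDescription)
    }
}
```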
@mopperson
Erik Hornberger
22 days
A new “permissiveContentTransformations” option for guardrails. It is meant for use cases like summarizing or performing style adjustments on content with potentially sensitive topics, such as articles about politics. (2/6).
developer.apple.com
Guardrails that allow for permissively transforming text input, including potentially unsafe content, to text responses, such as summarizing an article.
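A minimal sketch of opting in, assuming the session initializer's guardrails: parameter accepts the new option:

```swift
import FoundationModels

// Summarization is exactly the kind of transformation this option is meant for.
// Assumption: LanguageModelSession(guardrails:) accepts the new static option.
func summarize(_ article: String) async throws -> String {
    let session = LanguageModelSession(guardrails: .permissiveContentTransformations)
    let prompt = "Summarize the following article in three sentences:\n\n\(article)"
    let response = try await session.respond(to: prompt)
    return response.content
}
```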
@mopperson
Erik Hornberger
22 days
iOS 26 Beta 5 dropped today and contains some exciting additions to the Foundation Models framework. (1/6).
@mopperson
Erik Hornberger
29 days
Very clever idea, I love to see stuff like this! You can also add a slider to your preview to scrub back and forth across the different partially generated states. By @iosartem.
artemnovichkov.com
Learn how to use Foundation Models guided generation in Xcode previews
@mopperson
Erik Hornberger
2 months
When using Option 2, no rollback happens. The model's response appears in the transcript and can be referenced just like any other session context.
@mopperson
Erik Hornberger
2 months
If you’re using unstructured natural language output to convey information, Option 1 and Option 2 may both be on the table. Option 2 lets the model reason about what went wrong and explain the error in its own words. It’s pretty meta.
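Reading the thread, Option 2 appears to mean reporting the failure back to the model as ordinary tool output instead of throwing. A rough sketch under that assumption: the tool below is made up, the call(arguments:) return type has shifted across betas (a plain String is assumed to be accepted as output), and the failing lookup is a stand-in.

```swift
import Foundation
import FoundationModels

struct WeatherTool: Tool {
    let name = "fetchWeather"
    let description = "Fetches the current weather for a city."

    @Generable
    struct Arguments {
        @Guide(description: "The name of the city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> String {
        do {
            let temperature = try await lookUpTemperature(for: arguments.city)
            return "It is \(temperature)°F in \(arguments.city)."
        } catch {
            // Option 2: hand the failure to the model as tool output instead of
            // rethrowing, so it lands in the transcript and the model can explain it.
            return "The weather lookup for \(arguments.city) failed: \(error.localizedDescription)"
        }
    }

    // Stand-in for a real networking call.
    private func lookUpTemperature(for city: String) async throws -> Int {
        throw URLError(.notConnectedToInternet)
    }
}
```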
@mopperson
Erik Hornberger
2 months
Errors thrown during a tool call will be rethrown from `session.respond(to:)` so that you can apply a proper UI treatment there. The session's transcript will also roll back to a good state so that it's safe to try again.
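A sketch of the call-site half of this (the thread's Option 1, as I read it): catch the rethrown error and give it your own UI treatment. The error type is made up for illustration:

```swift
import FoundationModels

// Hypothetical error a tool might throw; the name is made up.
struct WeatherUnavailableError: Error {}

func askAboutWeather(_ question: String, session: LanguageModelSession) async -> String {
    do {
        let response = try await session.respond(to: question)
        return response.content
    } catch is WeatherUnavailableError {
        // The error thrown inside the tool surfaces here. (Depending on the SDK it
        // may arrive wrapped in a tool-call error instead; verify before shipping.)
        // The transcript has rolled back, so retrying the same prompt is safe.
        return "Weather data is unavailable right now. Please try again in a moment."
    } catch {
        return "Something went wrong: \(error.localizedDescription)"
    }
}
```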
@mopperson
Erik Hornberger
2 months
If you're using Guided Generation, you generally want Option 1. The generated type often isn't structured in a way that lets the model paint in an error explanation, and the UI for it probably isn't suitable for displaying errors either.
@mopperson
Erik Hornberger
2 months
Quiz: When would you want to use each of these? Give it a good think before checking the answer!
[Attached image]
@mopperson
Erik Hornberger
3 months
One of the sleeper features hidden in the FoundationModels framework is the ability to specify random seeds. If you keep a record of which seed you used, it should be possible to reproduce model responses, even when using random sampling.
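A sketch of what that could look like, assuming the .random(top:seed:) sampling mode on GenerationOptions; record the seed alongside the prompt so the response can be replayed later:

```swift
import FoundationModels

// Assumption: GenerationOptions exposes a .random(top:seed:) sampling mode.
func reproducibleResponse(to prompt: String, seed: UInt64) async throws -> String {
    let session = LanguageModelSession()
    let options = GenerationOptions(sampling: .random(top: 50, seed: seed))
    let response = try await session.respond(to: prompt, options: options)
    return response.content
}
```

Replaying the same prompt with the same seed and sampling parameters should yield the same output, given the same model version and session context.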
@mopperson
Erik Hornberger
3 months
This particular example is a flex made in jest. What's really happening here is that the model is making an educated, but lucky guess. With a different prompt or sampling parameters, it can miss the mark, just like all other LLMs.
@mopperson
Erik Hornberger
3 months
Ultimately, text gets tokenized into vectors of integers, and the model never actually sees any letters. Getting the correct answer consistently would require the model to divine how many of each letter are encoded into each token in its vocabulary.
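A toy illustration of the point, with a made-up subword segmentation; once the text becomes token IDs, the letters are simply not there for the model to count:

```swift
// Toy example, not a real tokenizer: a made-up subword vocabulary.
let vocabulary: [String: Int] = ["str": 302, "aw": 87, "berry": 1941]
let tokens = ["str", "aw", "berry"]                    // "strawberry" as subwords
let tokenIDs = tokens.compactMap { vocabulary[$0] }    // [302, 87, 1941]

// A program can count letters directly; the model only ever sees the IDs above,
// so it would effectively have to memorize the spelling of every token.
let rCount = "strawberry".filter { $0 == "r" }.count   // 3
print(tokenIDs, rCount)
```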
@mopperson
Erik Hornberger
3 months
For the uninitiated, there is a meme that even the biggest and best frontier models can't correctly count the number of r's in strawberry. Much has been written about why, but it boils down to how LLMs represent text.
@mopperson
Erik Hornberger
3 months
The only benchmark that really matters. 😜
[Attached image]
@mopperson
Erik Hornberger
3 months
Inspecting the transcript allows you to render views for more than just prompts and responses. It also gives you access to instructions, tool calls, and tool output, which can be helpful for debugging or being transparent about when tools are used.
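A sketch of walking the transcript for debugging or for rendering a richer chat view. The entry case names below, and the assumption that Transcript iterates as a collection of entries, should be checked against the SDK:

```swift
import FoundationModels

func describe(_ transcript: Transcript) -> [String] {
    transcript.map { entry -> String in
        switch entry {
        case .instructions:
            return "Instructions given to the session"
        case .prompt:
            return "A prompt from your app or the user"
        case .toolCalls:
            return "One or more tool calls requested by the model"
        case .toolOutput:
            return "Output returned by a tool"
        case .response:
            return "A response from the model"
        @unknown default:
            return "Other entry"
        }
    }
}

// Usage: describe(session.transcript).forEach { print($0) }
```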