Askar Yusupov

@pyoner

Followers: 357 · Following: 38K · Media: 4K · Statuses: 17K

Builder by day, storytelling writer by night—sharing threads on AI, tech, crypto, and code.

Joined November 2010
@pyoner
Askar Yusupov
3 months
1/8 Explore typed-prompt, a collection of modular TypeScript packages for building composable, strongly-typed prompt engineering solutions in AI applications. Find it here: … via @xcomposer_co ⬇️
1
0
1
@pyoner
Askar Yusupov
7 minutes
6/6 While the system is functional, some issues still need to be resolved. One notable quirk is its interaction with the car's immobilizer: the Pi must boot up before the car starts; otherwise, the engine shuts off and requires a key cycle to restart. This effectively creates a…
0
0
0
@pyoner
Askar Yusupov
7 minutes
5/6 The Pi's 3.5mm jack is connected to the stereo's CVBS input using a 3.5mm-to-RCA cable. He humorously notes that the red or white connector is used because his jack is faulty and only works when partially inserted. This unconventional setup ensures his crucial diagnostics remain…
1
0
0
@pyoner
Askar Yusupov
7 minutes
4/6 His current setup involves a Raspberry Pi taped to the ECU, connected to the OBD port via a VAG KKL cable. Power is supplied from the stereo's USB port, though this often triggers undervoltage warnings. An Ethernet connection allows him to SSH into the Pi for tinkering and…
1
0
0
@pyoner
Askar Yusupov
7 minutes
3/6 A breakthrough came when he discovered his car stereo had a 'CVBS IN' input for RCA composite video. This sparked the idea of connecting a display to it. To avoid embedded coding, he opted for a Raspberry Pi to run the Rust code, leveraging its comfortable Linux environment.
1
0
0
@pyoner
Askar Yusupov
7 minutes
2/6 The motivation behind `suzui-rs` stemmed from a desire to view engine parameters in his car without needing a laptop. Initially, he considered embedded development with an ESP32 but found it too complex for his needs. He then wondered whether he could build it in Rust, realizing a…
1
0
0
@pyoner
Askar Yusupov
7 minutes
1/6 Shehriyar Qureshi, known as @thatdevsherry, has developed an innovative Suzuki Serial Data Line (SDL) viewer in Rust. This project, named `suzui-rs`, is an oxidized version of his original prototype. You can explore the project further at…
1
0
1
@pyoner
Askar Yusupov
16 hours
17/17 Unlike many other frameworks that define LLM tools and interoperable tools separately, Spring AI natively bridges them. This eliminates duplication and extra wiring, streamlining development. Muthukumaran Navaneethakrishnan's blog post is based on Chapter 5 of his book…
0
0
0
@pyoner
Askar Yusupov
16 hours
16/17 A bonus feature of Spring AI is its ability to make tools work beyond chat, such as within other agents or frontend clients, with no extra code. By adding the MCP server starter, every `@Tool` method becomes an MCP-compliant endpoint. This provides instant interoperability.
Tweet media one
1
0
0
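For the curious, a minimal sketch of what that wiring can look like, assuming Spring AI 1.x type names (ToolCallbackProvider, MethodToolCallbackProvider) and an MCP server starter on the classpath; none of these identifiers come from the thread, and artifact ids vary by version:

```java
// Hypothetical sketch: exposing @Tool methods over MCP with Spring AI.
// Class and artifact names follow Spring AI 1.x but may differ by version.
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

class ProductTools {

    @Tool(description = "Find a product and its stock level by name")
    String findProductByName(String name) {
        // Illustrative stub; a real implementation would query a repository.
        return "{ \"name\": \"" + name + "\", \"stock\": 12 }";
    }
}

@Configuration
class McpToolConfig {

    // With an MCP server starter on the classpath (e.g. spring-ai-starter-mcp-server-webmvc;
    // the artifact id varies by version), the tools in this provider are published as
    // MCP-compliant endpoints without any extra code.
    @Bean
    ToolCallbackProvider productToolProvider() {
        return MethodToolCallbackProvider.builder()
                .toolObjects(new ProductTools())
                .build();
    }
}
```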
@pyoner
Askar Yusupov
16 hours
15/17 Spring AI also offers compatibility with various LLM providers, including OpenAI, Mistral, and Gemini. It integrates well with Spring Boot features like dependency injection, validation, and observability. This comprehensive support makes it a robust choice for tool calling.
1
0
0
@pyoner
Askar Yusupov
16 hours
14/17 With Spring AI, developers can define tools using simple annotations, such as `@Tool`. This approach automatically generates tool schemas and handles argument binding. The framework manages message state and orchestrates parallel and sequential tool calls seamlessly. ⬇️
Tweet media one
Tweet media two
1
0
0
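A minimal sketch of that annotation-based approach, assuming Spring AI 1.x names (ChatClient, @Tool, @ToolParam); the InventoryTools class and the stock numbers are invented for illustration:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;

class InventoryTools {

    @Tool(description = "Look up current stock for a product")
    int stockLevel(@ToolParam(description = "Exact product name") String productName) {
        // Illustrative stub; Spring AI derives the JSON schema from this method signature.
        return "AirPods Pro".equals(productName) ? 12 : 0;
    }
}

class InventoryAssistant {

    private final ChatClient chatClient;

    InventoryAssistant(ChatModel chatModel) {
        this.chatClient = ChatClient.create(chatModel);
    }

    String ask(String question) {
        return chatClient.prompt()
                .user(question)
                .tools(new InventoryTools())  // schema generation, argument binding and the tool loop are handled here
                .call()
                .content();
    }
}
```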
@pyoner
Askar Yusupov
16 hours
13/17 Spring AI simplifies the tool calling process by handling much of the underlying complexity. It speaks the same REST protocol but abstracts away the manual work, allowing developers to focus on business logic. This significantly reduces the boilerplate code required. ⬇️.
1
0
0
@pyoner
Askar Yusupov
16 hours
12/17 Manually implementing tool calling can be challenging due to the need to write JSON schemas, track tool call IDs, parse arguments, and handle multi-tool orchestration. It also involves maintaining conversation history and injecting system prompts. This complexity highlights why a framework can help.
1
0
0
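To make that boilerplate concrete, here is a compressed sketch of a hand-rolled tool-calling loop against an OpenAI-style chat completions API, using Jackson and java.net.http; the endpoint URL, model name, and findProductByName tool are placeholders, and auth headers, error handling, and retries are omitted:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class ManualToolLoop {

    private static final ObjectMapper JSON = new ObjectMapper();
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Hand-written JSON schema for a single tool: one of the chores a framework generates for you.
    private static final String TOOLS = """
        [ { "type": "function",
            "function": { "name": "findProductByName",
                          "description": "Find a product and its stock level by name",
                          "parameters": { "type": "object",
                                          "properties": { "name": { "type": "string" } },
                                          "required": ["name"] } } } ]
        """;

    static String chat(String userPrompt) throws Exception {
        ArrayNode messages = JSON.createArrayNode();  // you own the conversation history
        messages.add(JSON.createObjectNode().put("role", "user").put("content", userPrompt));

        while (true) {
            JsonNode message = post(messages).path("choices").get(0).path("message");
            messages.add(message);                    // keep the assistant turn in the history

            JsonNode toolCalls = message.path("tool_calls");
            if (!toolCalls.isArray() || toolCalls.size() == 0) {
                return message.path("content").asText();  // no tool call: this is the final answer
            }
            for (JsonNode call : toolCalls) {         // run each requested tool and echo its
                ObjectNode toolMsg = JSON.createObjectNode();  // result back, keyed by the call id
                toolMsg.put("role", "tool");
                toolMsg.put("tool_call_id", call.path("id").asText());
                toolMsg.put("content", dispatch(call.path("function").path("name").asText(),
                                                call.path("function").path("arguments").asText()));
                messages.add(toolMsg);
            }
        }
    }

    // Stand-in for your real business logic.
    private static String dispatch(String name, String jsonArgs) {
        return "findProductByName".equals(name)
                ? "{ \"name\": \"AirPods Pro\", \"stock\": 12 }"
                : "{ \"error\": \"unknown tool\" }";
    }

    private static JsonNode post(ArrayNode messages) throws Exception {
        ObjectNode body = JSON.createObjectNode().put("model", "gpt-4o-mini");  // placeholder model
        body.set("messages", messages);
        body.set("tools", JSON.readTree(TOOLS));
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(JSON.writeValueAsString(body)))
                .build();
        return JSON.readTree(HTTP.send(request, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```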
@pyoner
Askar Yusupov
16 hours
11/17 LLMs can also engage in sequential reasoning, performing actions step-by-step to fulfill a request. An example is dynamic SQL generation, where the model might first list tables, then get the schema, and finally run the SQL query. This demonstrates a more intricate, multi-step workflow.
Tweet media one
1
0
0
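A hypothetical tool set for that dynamic-SQL example, written in the Spring AI style used elsewhere in the thread; the method names (listTables, getTableSchema, runQuery) and the information_schema queries are illustrative, not from the blog post:

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
import org.springframework.jdbc.core.JdbcTemplate;

// The model can chain these calls across several tool-calling rounds:
// list tables -> inspect one schema -> run the generated query.
class SqlTools {

    private final JdbcTemplate jdbc;

    SqlTools(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Tool(description = "List the tables available in the database")
    List<String> listTables() {
        // PostgreSQL-style catalog query; adjust for your database.
        return jdbc.queryForList(
                "select table_name from information_schema.tables where table_schema = 'public'",
                String.class);
    }

    @Tool(description = "Describe the columns of one table")
    List<Map<String, Object>> getTableSchema(@ToolParam(description = "Table name") String table) {
        return jdbc.queryForList(
                "select column_name, data_type from information_schema.columns where table_name = ?",
                table);
    }

    @Tool(description = "Run a read-only SQL query and return the rows")
    List<Map<String, Object>> runQuery(@ToolParam(description = "SQL SELECT statement") String sql) {
        // In production you would validate or sandbox model-generated SQL before executing it.
        return jdbc.queryForList(sql);
    }
}
```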
@pyoner
Askar Yusupov
16 hours
10/17 When a user's query requires multiple pieces of information, the model can return multiple tool calls in parallel. This multi-tool capability allows for more complex queries, such as comparing different products. The diagram visually represents this parallel execution. ⬇️.
1
0
0
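Illustratively, a parallel reply carries several entries in its tool_calls array, one per requested call; this OpenAI-style shape (field names vary by provider) is sketched here as a Java text block:

```java
// Two calls to the same illustrative tool, e.g. to compare two products in one turn.
String parallelToolCalls = """
    {
      "role": "assistant",
      "tool_calls": [
        { "id": "call_1", "type": "function",
          "function": { "name": "findProductByName", "arguments": "{ \\"name\\": \\"AirPods Pro\\" }" } },
        { "id": "call_2", "type": "function",
          "function": { "name": "findProductByName", "arguments": "{ \\"name\\": \\"AirPods Max\\" }" } }
      ]
    }
    """;
```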
@pyoner
Askar Yusupov
16 hours
9/17 A single tool call involves the LLM selecting one tool and making a function call with specific parameters. This direct interaction allows the LLM to retrieve precise information needed to address a user's request. The diagram illustrates this straightforward flow. ⬇️
Tweet media one
1
0
0
@pyoner
Askar Yusupov
16 hours
8/17 Finally, the LLM processes the tool result and generates a final answer for the user. This completes the tool calling loop, demonstrating how LLMs can leverage external functions to provide more informed and actionable responses. This seamless integration enhances the LLM's usefulness.
Tweet media one
1
0
0
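For completeness, the closing turn of that loop is an ordinary assistant message with content and no further tool calls (illustrative shape, not taken from the post):

```java
String finalMessage = """
    { "role": "assistant",
      "content": "Yes, AirPods Pro is in stock (12 units available)." }
    """;
```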
@pyoner
Askar Yusupov
16 hours
7/17 The serialized tool result is then sent back to the LLM. This provides the model with the necessary information to formulate a complete and accurate response to the user's initial query. The conversation history, including the tool call and its result, is maintained for context.
Tweet media one
1
0
0
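Sketched as a Java text block, the resulting message history might look roughly like this; the OpenAI-style roles, the call_1 id, and the product data are illustrative:

```java
// The tool result goes back as a "tool" message, linked to the call by its id,
// and the whole history is resent so the model keeps full context.
String followUpMessages = """
    [
      { "role": "user", "content": "Is AirPods Pro in stock?" },
      { "role": "assistant",
        "tool_calls": [ { "id": "call_1", "type": "function",
            "function": { "name": "findProductByName",
                          "arguments": "{ \\"name\\": \\"AirPods Pro\\" }" } } ] },
      { "role": "tool", "tool_call_id": "call_1",
        "content": "{ \\"name\\": \\"AirPods Pro\\", \\"stock\\": 12 }" }
    ]
    """;
```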
@pyoner
Askar Yusupov
16 hours
6/17 Upon receiving the tool call, you execute the specified function within your system. The result of this execution, such as product details and stock levels, is then serialized into JSON. This structured output is crucial for the next step in the tool calling loop.
Tweet media one
Tweet media two
1
0
0
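A small sketch of that execute-and-serialize step, using Jackson for the JSON serialization; the Product record and the findProductByName stub are stand-ins for real business logic:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

class ToolExecutor {

    record Product(String name, int stock) {}

    // Illustrative stand-in for your real lookup logic.
    static Product findProductByName(String name) {
        return new Product(name, 12);
    }

    static String execute(String productName) throws Exception {
        // Serialize the result so it can be sent back to the model as the tool message.
        return new ObjectMapper().writeValueAsString(findProductByName(productName));
        // e.g. {"name":"AirPods Pro","stock":12}
    }
}
```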
@pyoner
Askar Yusupov
16 hours
5/17 Next, the model responds with a tool call, indicating the specific function it wishes to execute and the arguments required. For instance, if a user asks about product stock, the model might request the 'findProductByName' function with 'AirPods Pro' as the argument.
Tweet media one
1
0
0
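Illustratively, such a reply has roughly this shape for an OpenAI-style API (field names vary by provider; note that "arguments" arrives as a JSON-encoded string the caller must parse):

```java
String toolCallMessage = """
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        { "id": "call_1",
          "type": "function",
          "function": { "name": "findProductByName",
                        "arguments": "{ \\"name\\": \\"AirPods Pro\\" }" } }
      ]
    }
    """;
```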
@pyoner
Askar Yusupov
16 hours
4/17 The process begins by sending a user prompt along with tool definitions to the LLM provider's chat completions endpoint. This initial request informs the LLM about the available functions it can utilize. The model then analyzes the user's query and the provided tools to decide whether a tool call is needed.
Tweet media one
1
0
0
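As a sketch, that first request might look roughly like this for an OpenAI-style API; the model name and the findProductByName schema are illustrative, not taken from the post:

```java
// The user prompt plus the tool definitions, sent as the body of a POST
// to the provider's chat completions endpoint.
String initialRequest = """
    {
      "model": "gpt-4o-mini",
      "messages": [
        { "role": "user", "content": "Is AirPods Pro in stock?" }
      ],
      "tools": [
        { "type": "function",
          "function": {
            "name": "findProductByName",
            "description": "Find a product and its stock level by name",
            "parameters": {
              "type": "object",
              "properties": { "name": { "type": "string" } },
              "required": ["name"]
            }
          } }
      ]
    }
    """;
```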