- Added support for the Responses API as specified in the OpenAI documentation. It is currently used only for OpenAI, but we can adopt the API for other providers as soon as they offer it. This means that all text-based LLMs from OpenAI can now be used in MindWork AI Studio, for example the Deep Research models for comprehensive research tasks.
- Added support for web searches. This is currently supported by some OpenAI models (e.g., GPT-5, newer Omni models, and the Deep Research models) as well as Perplexity. The sources used are displayed in the chat interface.
- Improved the enterprise configuration of complex objects, such as chat templates: the corresponding Lua tables are now optional. This ensures that existing enterprise configurations remain valid when new configuration options are added, so they do not require adjustments with every update (see the sketch after this list).
- Fixed a rare chat-related bug that could occur when a workspace was not created correctly. Thank you, Naomi, for reporting this issue.
- Fixed a bug in the web content reader where a preconfigured LLM provider was not recognized, resulting in an error message.
- Fixed another bug in the web content reader: the system incorrectly evaluated whether the content cleaning agent was preselected. Users had to first deactivate the agent and then reactivate it to make it work correctly.
- Fixed a bug in the assistants for text summarization, translations, and legal check: the web content reader preferences were not being applied when resetting the assistant.
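A minimal sketch of what an optional table means for an enterprise configuration. The field names below are illustrative assumptions, not the actual MindWork AI Studio schema; the point is only that a complex object like a chat template may be omitted without invalidating the rest of the configuration:

```lua
-- Hypothetical enterprise configuration (illustrative names, not the real schema).
CONFIG = {
    -- Basic settings are configured as before.
    LLM_PROVIDERS = {
        {
            Name = "Company OpenAI",
            Host = "https://api.openai.com",
            Model = "gpt-5",
        },
    },

    -- The chat template table is now optional: leaving it out (or commented
    -- out, as here) keeps the configuration valid, even when future updates
    -- add new options to it.
    -- CHAT_TEMPLATES = {
    --     {
    --         Name = "Support Answers",
    --         SystemPrompt = "You answer support tickets politely.",
    --     },
    -- },
}

return CONFIG
```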