diff --git a/README.md b/README.md index 9c1b3a37..16b57720 100644 --- a/README.md +++ b/README.md @@ -15,6 +15,7 @@ Things we are currently working on: - [x] ~~App: Management of data sources (local & external data via [ERI](https://github.com/MindWorkAI/ERI)) (PR [#259](https://github.com/MindWorkAI/AI-Studio/pull/259), [#273](https://github.com/MindWorkAI/AI-Studio/pull/273))~~ - [x] ~~Runtime: Extract data from txt / md / pdf / docx / xlsx files (PR [#374](https://github.com/MindWorkAI/AI-Studio/pull/374))~~ - [ ] (*Optional*) Runtime: Implement internal embedding provider through [fastembed-rs](https://github.com/Anush008/fastembed-rs) + - [ ] App: Implement dialog for checking & handling [pandoc](https://pandoc.org/) installation ([PR #393](https://github.com/MindWorkAI/AI-Studio/pull/393)) - [ ] App: Implement external embedding providers - [ ] App: Implement the process to vectorize one local file using embeddings - [ ] Runtime: Integration of the vector database [LanceDB](https://github.com/lancedb/lancedb) @@ -24,26 +25,37 @@ Things we are currently working on: - [x] ~~App: Integrate data sources in chats (PR [#282](https://github.com/MindWorkAI/AI-Studio/pull/282))~~ -- Since September 2024: Experiments have been started on how we can work on long texts with AI Studio. Let's say you want to write a fantasy novel or create a complex project proposal and use LLM for support. The initial experiments were promising, but not yet satisfactory. We are testing further approaches until a satisfactory solution is found. The current state of our experiment is available as an experimental preview feature through your app configuration. Related PR: ~~[#167](https://github.com/MindWorkAI/AI-Studio/pull/167), [#226](https://github.com/MindWorkAI/AI-Studio/pull/226)~~. +- Since September 2024: Experiments have been started on how we can work on long texts with AI Studio. Let's say you want to write a fantasy novel or create a complex project proposal and use LLM for support. The initial experiments were promising, but not yet satisfactory. We are testing further approaches until a satisfactory solution is found. The current state of our experiment is available as an experimental preview feature through your app configuration. Related PR: ~~[PR #167](https://github.com/MindWorkAI/AI-Studio/pull/167), [PR #226](https://github.com/MindWorkAI/AI-Studio/pull/226)~~, [PR #376](https://github.com/MindWorkAI/AI-Studio/pull/376). +- Since March 2025: We have started developing the plugin system. There will be language plugins to offer AI Studio in other languages, configuration plugins to centrally manage certain providers and rules within an organization, and assistant plugins that allow anyone to develop their own assistants. 
We are using Lua as the plugin language: + - [x] ~~Plan & implement the base plugin system ([PR #322](https://github.com/MindWorkAI/AI-Studio/pull/322))~~ + - [x] ~~Start the plugin system ([PR #372](https://github.com/MindWorkAI/AI-Studio/pull/372))~~ + - [x] ~~Added hot-reload support for plugins ([PR #377](https://github.com/MindWorkAI/AI-Studio/pull/377), [PR #391](https://github.com/MindWorkAI/AI-Studio/pull/391))~~ + - [ ] Add support for other languages (I18N) to AI Studio (~~[PR #381](https://github.com/MindWorkAI/AI-Studio/pull/381), [PR #400](https://github.com/MindWorkAI/AI-Studio/pull/400)~~, [PR #404](https://github.com/MindWorkAI/AI-Studio/pull/404)) + - [ ] Add an I18N assistant to translate all AI Studio texts to a certain language & culture + - [ ] Provide MindWork AI Studio in German ([#31](https://github.com/MindWorkAI/Planning/issues/31)) + - [ ] Add configuration plugins, which allow pre-defining some LLM providers in organizations + - [ ] Add an app store for plugins, showcasing community-contributed plugins from public GitHub and GitLab repositories. This will enable AI Studio users to discover, install, and update plugins directly within the platform. + - [ ] Add assistant plugins Other News: -- October 2024: We've found the first two financial supporters. Huge thanks to `richard-stanton` and `peerschuett` for backing the project. Thanks for having the courage to be the first to support us. +- April 2025: We have two active financial supporters: Peer `peerschuett` and Dominic `donework`. Thank you very much for your support. MindWork AI reinvests these donations by passing them on to our AI Studio dependencies ([see here](https://github.com/orgs/MindWorkAI/sponsoring)). In the event that we receive large donations, we will first sign the app ([#56](https://github.com/MindWorkAI/Planning/issues/56)). In case we receive more donations, we will look for and pay staff to develop features for AI Studio. -- October 2024: The [German Aerospace Center (DLR)](https://en.wikipedia.org/wiki/German_Aerospace_Center) ([Website](https://www.dlr.de/en)) will use AI Studio at least within the scope of one project and will also contribute to its further development. This is great news. +- April 2025: The [German Aerospace Center (DLR)](https://en.wikipedia.org/wiki/German_Aerospace_Center) ([Website](https://www.dlr.de/en)) will use AI Studio at least within the scope of three projects and will also contribute to its further development. This is great news. Features we have recently released: +- v0.9.40: Added support for the `o4` models from OpenAI. Also, we added Alibaba Cloud & Hugging Face as LLM providers. +- v0.9.39: Added the plugin system as a preview feature. +- v0.9.31: Added Helmholtz & GWDG as LLM providers. This is a huge improvement for many researchers out there who can use these providers for free. We added DeepSeek as a provider as well. +- v0.9.29: Added agents to support the RAG process (selecting the best data sources & validating retrieved data as part of the augmentation process) +- v0.9.26+: Added RAG for external data sources using our [ERI interface](https://mindworkai.org/#eri---external-retrieval-interface) as a preview feature. - v0.9.25: Added [xAI](https://x.ai/) as a new provider. xAI provides their Grok models for generating content. - v0.9.23: Added support for OpenAI `o` models (`o1`, `o1-mini`, `o3`, etc.); added also an [ERI](https://github.com/MindWorkAI/ERI) server coding assistant as a preview feature behind the RAG feature flag. 
Your own ERI server can be used to gain access to, e.g., your enterprise data from within AI Studio. - v0.9.22: Added options for preview features; added embedding provider configuration for RAG (preview) and writer mode (experimental preview). - v0.9.18: Added the new Anthropic Haiku model; added Groq and Google Gemini as provider options. -- v0.9.17: Added the new Anthropic model `claude-3-5-sonnet-20241022`. -- v0.9.16: Added workspace display options & improved the layout of the app window. -- v0.9.15: Added the bias-of-the-day assistant. Tells you about a cognitive bias every day. -- v0.9.13: You can use `ollama` providers secured with API keys. ## What is AI Studio? @@ -56,7 +68,19 @@ MindWork AI Studio is a free desktop app for macOS, Windows, and Linux. It provi **Key advantages:** - **Free of charge**: The app is free to use, both for personal and commercial purposes. -- **Independence**: You are not tied to any single provider. Instead, you can choose the provider that best suits their needs. Right now, we support OpenAI (GPT4o, o1, etc.), Mistral, Anthropic (Claude), Google Gemini, xAI (Grok), DeepSeek, and self-hosted models using [llama.cpp](https://github.com/ggerganov/llama.cpp), [ollama](https://github.com/ollama/ollama), [LM Studio](https://lmstudio.ai/), [Groq](https://groq.com/), or [Fireworks](https://fireworks.ai/). For scientists and employees of research institutions, we also support [Helmholtz](https://helmholtz.cloud/services/?serviceID=d7d5c597-a2f6-4bd1-b71e-4d6499d98570) and [GWDG](https://gwdg.de/services/application-services/ai-services/) AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, most German, and many international universities. +- **Independence**: You are not tied to any single provider. Instead, you can choose the providers that best suit your needs. Right now, we support: + - [OpenAI](https://openai.com/) (GPT4o, GPT4.1, o1, o3, o4, etc.) + - [Mistral](https://mistral.ai/) + - [Anthropic](https://www.anthropic.com/) (Claude) + - [Google Gemini](https://gemini.google.com) + - [xAI](https://x.ai/) (Grok) + - [DeepSeek](https://www.deepseek.com/en) + - [Alibaba Cloud](https://www.alibabacloud.com) (Qwen) + - [Hugging Face](https://huggingface.co/) using their [inference providers](https://huggingface.co/docs/inference-providers/index) such as Cerebras, Nebius, Sambanova, Novita, Hyperbolic, Together AI, Fireworks, and Hugging Face itself + - Self-hosted models using [llama.cpp](https://github.com/ggerganov/llama.cpp), [ollama](https://github.com/ollama/ollama), [LM Studio](https://lmstudio.ai/) + - [Groq](https://groq.com/) + - [Fireworks](https://fireworks.ai/) + - For scientists and employees of research institutions, we also support [Helmholtz](https://helmholtz.cloud/services/?serviceID=d7d5c597-a2f6-4bd1-b71e-4d6499d98570) and [GWDG](https://gwdg.de/services/application-services/ai-services/) AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, most German, and many international universities. - **Assistants**: You just want to quickly translate a text? AI Studio has so-called assistants for such and other tasks. No prompting is necessary when working with these assistants. - **Unrestricted usage**: Unlike services like ChatGPT, which impose limits after intensive use, MindWork AI Studio offers unlimited usage through the providers API. 
- **Cost-effective**: You only pay for what you use, which can be cheaper than monthly subscription services like ChatGPT Plus, especially if used infrequently. But beware, here be dragons: For extremely intensive usage, the API costs can be significantly higher. Unfortunately, providers currently do not offer a way to display current costs in the app. Therefore, check your account with the respective provider to see how your costs are developing. When available, use prepaid and set a cost limit. diff --git a/app/MindWork AI Studio/Components/Settings/SettingsPanelProviders.razor.cs b/app/MindWork AI Studio/Components/Settings/SettingsPanelProviders.razor.cs index 5a71925b..ce03a430 100644 --- a/app/MindWork AI Studio/Components/Settings/SettingsPanelProviders.razor.cs +++ b/app/MindWork AI Studio/Components/Settings/SettingsPanelProviders.razor.cs @@ -65,7 +65,7 @@ public partial class SettingsPanelProviders : SettingsPanelBase { x => x.IsSelfHosted, provider.IsSelfHosted }, { x => x.IsEditing, true }, { x => x.DataHost, provider.Host }, - { x => x.HfInstanceProviderId, provider.HFInstanceProvider }, + { x => x.HFInferenceProviderId, provider.HFInferenceProvider }, }; var dialogReference = await this.DialogService.ShowAsync("Edit LLM Provider", dialogParameters, DialogOptions.FULLSCREEN); diff --git a/app/MindWork AI Studio/Dialogs/ProviderDialog.razor b/app/MindWork AI Studio/Dialogs/ProviderDialog.razor index 1a01faba..3664062a 100644 --- a/app/MindWork AI Studio/Dialogs/ProviderDialog.razor +++ b/app/MindWork AI Studio/Dialogs/ProviderDialog.razor @@ -61,20 +61,22 @@ @if (this.DataLLMProvider.IsHFInstanceProviderNeeded()) { - - @foreach (HFInstanceProvider instanceProvider in Enum.GetValues(typeof(HFInstanceProvider))) + + @foreach (HFInferenceProvider inferenceProvider in Enum.GetValues(typeof(HFInferenceProvider))) { - @instanceProvider.ToName() + + @inferenceProvider.ToName() + } - Please double-check if your model name matches the curl specifications provided by the instance provider. If it doesn't, you might get a Not Found error when trying to use the model. Here's a curl example. + Please double-check if your model name matches the curl specifications provided by the inference provider. If it doesn't, you might get a Not Found error when trying to use the model. Here's a curl example. } @if (this.DataLLMProvider.IsLLMModelProvidedManually()) { - Show available models + Show available models [Parameter] - public HFInstanceProvider HfInstanceProviderId { get; set; } = HFInstanceProvider.NONE; + public HFInferenceProvider HFInferenceProviderId { get; set; } = HFInferenceProvider.NONE; /// /// Is this provider self-hosted? @@ -138,7 +138,7 @@ public partial class ProviderDialog : ComponentBase, ISecretId IsSelfHosted = this.DataLLMProvider is LLMProviders.SELF_HOSTED, Hostname = cleanedHostname.EndsWith('/') ? cleanedHostname[..^1] : cleanedHostname, Host = this.DataHost, - HFInstanceProvider = this.HfInstanceProviderId, + HFInferenceProvider = this.HFInferenceProviderId, }; } diff --git a/app/MindWork AI Studio/Pages/Home.razor.cs b/app/MindWork AI Studio/Pages/Home.razor.cs index 4468e651..6a5dfff0 100644 --- a/app/MindWork AI Studio/Pages/Home.razor.cs +++ b/app/MindWork AI Studio/Pages/Home.razor.cs @@ -39,7 +39,7 @@ public partial class Home : MSGComponentBase private static readonly TextItem[] ITEMS_ADVANTAGES = [ new("Free of charge", "The app is free to use, both for personal and commercial purposes."), - new("Independence", "You are not tied to any single provider. 
Instead, you might choose the provider that best suits your needs. Right now, we support OpenAI (GPT4o, o1, etc.), Mistral, Anthropic (Claude), Google Gemini, xAI (Grok), DeepSeek, and self-hosted models using llama.cpp, ollama, LM Studio, Groq, or Fireworks. For scientists and employees of research institutions, we also support Helmholtz and GWDG AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, most German, and many international universities."), + new("Independence", "You are not tied to any single provider. Instead, you might choose the provider that best suits your needs. Right now, we support OpenAI (GPT4o, o1, etc.), Mistral, Anthropic (Claude), Google Gemini, xAI (Grok), DeepSeek, Alibaba Cloud (Qwen), Hugging Face, and self-hosted models using llama.cpp, ollama, LM Studio, Groq, or Fireworks. For scientists and employees of research institutions, we also support Helmholtz and GWDG AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, most German, and many international universities."), new("Assistants", "You just want to quickly translate a text? AI Studio has so-called assistants for such and other tasks. No prompting is necessary when working with these assistants."), new("Unrestricted usage", "Unlike services like ChatGPT, which impose limits after intensive use, MindWork AI Studio offers unlimited usage through the providers API."), new("Cost-effective", "You only pay for what you use, which can be cheaper than monthly subscription services like ChatGPT Plus, especially if used infrequently. But beware, here be dragons: For extremely intensive usage, the API costs can be significantly higher. Unfortunately, providers currently do not offer a way to display current costs in the app. Therefore, check your account with the respective provider to see how your costs are developing. When available, use prepaid and set a cost limit."), diff --git a/app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProvider.cs b/app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProvider.cs similarity index 70% rename from app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProvider.cs rename to app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProvider.cs index 63221290..01b722eb 100644 --- a/app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProvider.cs +++ b/app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProvider.cs @@ -1,9 +1,9 @@ namespace AIStudio.Provider.HuggingFace; /// -/// Enum for instance providers that Hugging Face supports. +/// Enum for inference providers that Hugging Face supports. 
/// -public enum HFInstanceProvider +public enum HFInferenceProvider { NONE, diff --git a/app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProviderExtensions.cs b/app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProviderExtensions.cs new file mode 100644 index 00000000..0e103938 --- /dev/null +++ b/app/MindWork AI Studio/Provider/HuggingFace/HFInferenceProviderExtensions.cs @@ -0,0 +1,43 @@ +namespace AIStudio.Provider.HuggingFace; + +public static class HFInferenceProviderExtensions +{ + public static string Endpoints(this HFInferenceProvider provider, Model model) => provider switch + { + HFInferenceProvider.CEREBRAS => "cerebras/v1/", + HFInferenceProvider.NEBIUS_AI_STUDIO => "nebius/v1/", + HFInferenceProvider.SAMBANOVA => "sambanova/v1/", + HFInferenceProvider.NOVITA => "novita/v3/openai/", + HFInferenceProvider.HYPERBOLIC => "hyperbolic/v1/", + HFInferenceProvider.TOGETHER_AI => "together/v1/", + HFInferenceProvider.FIREWORKS => "fireworks-ai/inference/v1/", + HFInferenceProvider.HF_INFERENCE_API => $"hf-inference/models/{model.ToString()}/v1/", + _ => string.Empty, + }; + + public static string EndpointsId(this HFInferenceProvider provider) => provider switch + { + HFInferenceProvider.CEREBRAS => "cerebras", + HFInferenceProvider.NEBIUS_AI_STUDIO => "nebius", + HFInferenceProvider.SAMBANOVA => "sambanova", + HFInferenceProvider.NOVITA => "novita", + HFInferenceProvider.HYPERBOLIC => "hyperbolic", + HFInferenceProvider.TOGETHER_AI => "together", + HFInferenceProvider.FIREWORKS => "fireworks", + HFInferenceProvider.HF_INFERENCE_API => "hf-inference", + _ => string.Empty, + }; + + public static string ToName(this HFInferenceProvider provider) => provider switch + { + HFInferenceProvider.CEREBRAS => "Cerebras", + HFInferenceProvider.NEBIUS_AI_STUDIO => "Nebius AI Studio", + HFInferenceProvider.SAMBANOVA => "Sambanova", + HFInferenceProvider.NOVITA => "Novita", + HFInferenceProvider.HYPERBOLIC => "Hyperbolic", + HFInferenceProvider.TOGETHER_AI => "Together AI", + HFInferenceProvider.FIREWORKS => "Fireworks AI", + HFInferenceProvider.HF_INFERENCE_API => "Hugging Face Inference API", + _ => string.Empty, + }; +} \ No newline at end of file diff --git a/app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProviderExtensions.cs b/app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProviderExtensions.cs deleted file mode 100644 index b0d81fba..00000000 --- a/app/MindWork AI Studio/Provider/HuggingFace/HFInstanceProviderExtensions.cs +++ /dev/null @@ -1,43 +0,0 @@ -namespace AIStudio.Provider.HuggingFace; - -public static class HFInstanceProviderExtensions -{ - public static string Endpoints(this HFInstanceProvider provider, Model model) => provider switch - { - HFInstanceProvider.CEREBRAS => "cerebras/v1/", - HFInstanceProvider.NEBIUS_AI_STUDIO => "nebius/v1/", - HFInstanceProvider.SAMBANOVA => "sambanova/v1/", - HFInstanceProvider.NOVITA => "novita/v3/openai/", - HFInstanceProvider.HYPERBOLIC => "hyperbolic/v1/", - HFInstanceProvider.TOGETHER_AI => "together/v1/", - HFInstanceProvider.FIREWORKS => "fireworks-ai/inference/v1/", - HFInstanceProvider.HF_INFERENCE_API => $"hf-inference/models/{model.ToString()}/v1/", - _ => string.Empty, - }; - - public static string EndpointsId(this HFInstanceProvider provider) => provider switch - { - HFInstanceProvider.CEREBRAS => "cerebras", - HFInstanceProvider.NEBIUS_AI_STUDIO => "nebius", - HFInstanceProvider.SAMBANOVA => "sambanova", - HFInstanceProvider.NOVITA => "novita", - HFInstanceProvider.HYPERBOLIC => "hyperbolic", - 
HFInstanceProvider.TOGETHER_AI => "together", - HFInstanceProvider.FIREWORKS => "fireworks", - HFInstanceProvider.HF_INFERENCE_API => "hf-inference", - _ => string.Empty, - }; - - public static string ToName(this HFInstanceProvider provider) => provider switch - { - HFInstanceProvider.CEREBRAS => "Cerebras", - HFInstanceProvider.NEBIUS_AI_STUDIO => "Nebius AI Studio", - HFInstanceProvider.SAMBANOVA => "Sambanova", - HFInstanceProvider.NOVITA => "Novita", - HFInstanceProvider.HYPERBOLIC => "Hyperbolic", - HFInstanceProvider.TOGETHER_AI => "Together AI", - HFInstanceProvider.FIREWORKS => "Fireworks AI", - HFInstanceProvider.HF_INFERENCE_API => "Hugging Face Inference API", - _ => string.Empty, - }; -} \ No newline at end of file diff --git a/app/MindWork AI Studio/Provider/HuggingFace/ProviderHuggingFace.cs b/app/MindWork AI Studio/Provider/HuggingFace/ProviderHuggingFace.cs index 25f2baae..e98de1f9 100644 --- a/app/MindWork AI Studio/Provider/HuggingFace/ProviderHuggingFace.cs +++ b/app/MindWork AI Studio/Provider/HuggingFace/ProviderHuggingFace.cs @@ -11,9 +11,9 @@ namespace AIStudio.Provider.HuggingFace; public sealed class ProviderHuggingFace : BaseProvider { - public ProviderHuggingFace(ILogger logger, HFInstanceProvider hfProvider, Model model) : base($"https://router.huggingface.co/{hfProvider.Endpoints(model)}", logger) + public ProviderHuggingFace(ILogger logger, HFInferenceProvider hfProvider, Model model) : base($"https://router.huggingface.co/{hfProvider.Endpoints(model)}", logger) { - logger.LogInformation($"We use the instance provider '{hfProvider}'. Thus we use the base URL 'https://router.huggingface.co/{hfProvider.Endpoints(model)}'."); + logger.LogInformation($"We use the inference provider '{hfProvider}'. Thus we use the base URL 'https://router.huggingface.co/{hfProvider.Endpoints(model)}'."); } #region Implementation of IProvider diff --git a/app/MindWork AI Studio/Provider/LLMProvidersExtensions.cs b/app/MindWork AI Studio/Provider/LLMProvidersExtensions.cs index 8abb0bd4..68a62297 100644 --- a/app/MindWork AI Studio/Provider/LLMProvidersExtensions.cs +++ b/app/MindWork AI Studio/Provider/LLMProvidersExtensions.cs @@ -135,7 +135,7 @@ public static class LLMProvidersExtensions /// The provider instance. public static IProvider CreateProvider(this AIStudio.Settings.Provider providerSettings, ILogger logger) { - return providerSettings.UsedLLMProvider.CreateProvider(providerSettings.InstanceName, providerSettings.Host, providerSettings.Hostname, providerSettings.Model, providerSettings.HFInstanceProvider ,logger); + return providerSettings.UsedLLMProvider.CreateProvider(providerSettings.InstanceName, providerSettings.Host, providerSettings.Hostname, providerSettings.Model, providerSettings.HFInferenceProvider ,logger); } /// @@ -146,10 +146,10 @@ public static class LLMProvidersExtensions /// The provider instance. 
public static IProvider CreateProvider(this EmbeddingProvider embeddingProviderSettings, ILogger logger) { - return embeddingProviderSettings.UsedLLMProvider.CreateProvider(embeddingProviderSettings.Name, embeddingProviderSettings.Host, embeddingProviderSettings.Hostname, embeddingProviderSettings.Model, HFInstanceProvider.NONE,logger); + return embeddingProviderSettings.UsedLLMProvider.CreateProvider(embeddingProviderSettings.Name, embeddingProviderSettings.Host, embeddingProviderSettings.Hostname, embeddingProviderSettings.Model, HFInferenceProvider.NONE,logger); } - private static IProvider CreateProvider(this LLMProviders provider, string instanceName, Host host, string hostname, Model model, HFInstanceProvider instanceProvider , ILogger logger) + private static IProvider CreateProvider(this LLMProviders provider, string instanceName, Host host, string hostname, Model model, HFInferenceProvider inferenceProvider , ILogger logger) { try { @@ -165,7 +165,7 @@ public static class LLMProvidersExtensions LLMProviders.GROQ => new ProviderGroq(logger) { InstanceName = instanceName }, LLMProviders.FIREWORKS => new ProviderFireworks(logger) { InstanceName = instanceName }, - LLMProviders.HUGGINGFACE => new ProviderHuggingFace(logger, instanceProvider, model) { InstanceName = instanceName }, + LLMProviders.HUGGINGFACE => new ProviderHuggingFace(logger, inferenceProvider, model) { InstanceName = instanceName }, LLMProviders.SELF_HOSTED => new ProviderSelfHosted(logger, host, hostname) { InstanceName = instanceName }, @@ -234,10 +234,10 @@ public static class LLMProvidersExtensions _ => false, }; - public static string GetModelsOverviewURL(this LLMProviders provider, HFInstanceProvider instanceProvider) => provider switch + public static string GetModelsOverviewURL(this LLMProviders provider, HFInferenceProvider inferenceProvider) => provider switch { LLMProviders.FIREWORKS => "https://fireworks.ai/models?show=Serverless", - LLMProviders.HUGGINGFACE => $"https://huggingface.co/models?inference_provider={instanceProvider.EndpointsId()}", + LLMProviders.HUGGINGFACE => $"https://huggingface.co/models?inference_provider={inferenceProvider.EndpointsId()}", _ => string.Empty, }; diff --git a/app/MindWork AI Studio/Settings/Provider.cs b/app/MindWork AI Studio/Settings/Provider.cs index 6aefc5b5..33d39d3d 100644 --- a/app/MindWork AI Studio/Settings/Provider.cs +++ b/app/MindWork AI Studio/Settings/Provider.cs @@ -25,7 +25,7 @@ public readonly record struct Provider( bool IsSelfHosted = false, string Hostname = "http://localhost:1234", Host Host = Host.NONE, - HFInstanceProvider HFInstanceProvider = HFInstanceProvider.NONE) : ISecretId + HFInferenceProvider HFInferenceProvider = HFInferenceProvider.NONE) : ISecretId { #region Overrides of ValueType diff --git a/app/MindWork AI Studio/Tools/Validation/ProviderValidation.cs b/app/MindWork AI Studio/Tools/Validation/ProviderValidation.cs index 12d27b43..9205b1ef 100644 --- a/app/MindWork AI Studio/Tools/Validation/ProviderValidation.cs +++ b/app/MindWork AI Studio/Tools/Validation/ProviderValidation.cs @@ -94,13 +94,13 @@ public sealed class ProviderValidation return null; } - public string? ValidatingHFInstanceProvider(HFInstanceProvider instanceProvider) + public string? 
ValidatingHFInstanceProvider(HFInferenceProvider inferenceProvider) { if(this.GetProvider() is not LLMProviders.HUGGINGFACE) return null; - if (instanceProvider is HFInstanceProvider.NONE) - return "Please select an Hugging Face instance provider."; + if (inferenceProvider is HFInferenceProvider.NONE) + return "Please select a Hugging Face inference provider."; return null; } diff --git a/app/MindWork AI Studio/wwwroot/changelog/v0.9.40.md b/app/MindWork AI Studio/wwwroot/changelog/v0.9.40.md index 29426981..a34d2872 100644 --- a/app/MindWork AI Studio/wwwroot/changelog/v0.9.40.md +++ b/app/MindWork AI Studio/wwwroot/changelog/v0.9.40.md @@ -2,6 +2,7 @@ - Added support for the announced OpenAI `o4` models. - Added Alibaba Cloud as a new provider. Thanks Peer `peerschuett` for the contribution. - Added the Hugging Face inference provider as an LLM provider to AI Studio. Thanks Peer `peerschuett` for the contribution. +- Added the current CPU architecture to the about page. This information helps us identify issues. - Improved the LLM & embedding provider dialogs by hiding not relevant options. - Improved the provider selection by showing the name of the provider in the provider selection instead of its identifier. - Improved the developer experience by adding a tolerant enum converter for better configuration handling. diff --git a/documentation/Setup.md b/documentation/Setup.md index b06b6cbc..3a3af745 100644 --- a/documentation/Setup.md +++ b/documentation/Setup.md @@ -14,7 +14,7 @@ AI Studio is only available for modern 64-bit Windows systems. When you have an - **Intel/AMD:** In almost all other cases, you have an Intel/AMD system. [Download the x64 version](https://github.com/MindWorkAI/AI-Studio/releases/latest/download/MindWork.AI.Studio_x64-setup.exe) of AI Studio. -When you try to install the app, you get a message regarding protection of your PC (see screenshots below). For Windows to trust our app, we need to purchase a certificate that costs around $1000 per year. Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark. +When you try to install the app, you get a message regarding protection of your PC (see screenshots below). For Windows to trust our app, we need to purchase a certificate that [costs around $1000 per year](https://github.com/MindWorkAI/Planning/issues/56). Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark. When you are confident in the app's safety, click on "More info" and then "Run anyway" to proceed with the installation: @@ -43,7 +43,7 @@ When you try to open the app, you get a message that the app is damaged: ![macOS Installation 2](macOS%20Damage.png) -This is because we don't have an Apple Developer account, which costs around $100 per year. Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). 
You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark. +This is because we don't have an Apple Developer account, [which costs around $100 per year](https://github.com/MindWorkAI/Planning/issues/56). Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark. When you are confident in the app's safety, follow these steps: diff --git a/metadata.txt b/metadata.txt index c1dcbe32..11d63db7 100644 --- a/metadata.txt +++ b/metadata.txt @@ -7,4 +7,4 @@ 8.5.1 1.8.1 19935769035, release -osx-arm64 +osx-arm64 \ No newline at end of file
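Side note on the `HFInstanceProvider` → `HFInferenceProvider` rename above: the following is a minimal, self-contained C# sketch (not part of this PR) showing how an inference-provider value maps to a path segment that is appended to the Hugging Face router host, mirroring the `Endpoints()` switch and the `ProviderHuggingFace` constructor in the diff. The reduced enum, the plain `modelId` string (standing in for the app's `Model` type), and the example model id are illustrative assumptions.

```csharp
using System;

namespace RouterUrlSketch;

// Reduced subset of the HFInferenceProvider enum from the diff above.
public enum HFInferenceProvider
{
    NONE,
    CEREBRAS,
    NOVITA,
    HF_INFERENCE_API,
}

public static class HFInferenceProviderSketch
{
    // Mirrors HFInferenceProviderExtensions.Endpoints(); a plain modelId string
    // stands in for the app's Model type, which only matters for the
    // HF_INFERENCE_API case.
    public static string Endpoints(this HFInferenceProvider provider, string modelId) => provider switch
    {
        HFInferenceProvider.CEREBRAS => "cerebras/v1/",
        HFInferenceProvider.NOVITA => "novita/v3/openai/",
        HFInferenceProvider.HF_INFERENCE_API => $"hf-inference/models/{modelId}/v1/",
        _ => string.Empty,
    };

    public static void Main()
    {
        // As in the ProviderHuggingFace constructor: router host + provider-specific path.
        var baseUrl = $"https://router.huggingface.co/{HFInferenceProvider.CEREBRAS.Endpoints(string.Empty)}";
        Console.WriteLine(baseUrl); // https://router.huggingface.co/cerebras/v1/

        // The HF_INFERENCE_API case additionally embeds the model id in the path
        // (example model id for illustration only).
        var hfUrl = $"https://router.huggingface.co/{HFInferenceProvider.HF_INFERENCE_API.Endpoints("meta-llama/Llama-3.1-8B-Instruct")}";
        Console.WriteLine(hfUrl); // https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.1-8B-Instruct/v1/
    }
}
```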