Mirror of https://github.com/MindWorkAI/AI-Studio.git (synced 2025-04-27 22:59:47 +00:00)
Updated documentation (#409)
This commit is contained in: parent c10c084da4, commit 2144cfe059
Changed file: README.md (40 lines)
@@ -15,6 +15,7 @@ Things we are currently working on:
- [x] ~~App: Management of data sources (local & external data via [ERI](https://github.com/MindWorkAI/ERI)) (PR [#259](https://github.com/MindWorkAI/AI-Studio/pull/259), [#273](https://github.com/MindWorkAI/AI-Studio/pull/273))~~
- [x] ~~Runtime: Extract data from txt / md / pdf / docx / xlsx files (PR [#374](https://github.com/MindWorkAI/AI-Studio/pull/374))~~
- [ ] (*Optional*) Runtime: Implement internal embedding provider through [fastembed-rs](https://github.com/Anush008/fastembed-rs)
- [ ] App: Implement dialog for checking & handling [pandoc](https://pandoc.org/) installation ([PR #393](https://github.com/MindWorkAI/AI-Studio/pull/393))
- [ ] App: Implement external embedding providers
- [ ] App: Implement the process to vectorize one local file using embeddings
- [ ] Runtime: Integration of the vector database [LanceDB](https://github.com/lancedb/lancedb)
@@ -24,26 +25,37 @@ Things we are currently working on:
- [x] ~~App: Integrate data sources in chats (PR [#282](https://github.com/MindWorkAI/AI-Studio/pull/282))~~
- Since September 2024: Experiments have been started on how we can work on long texts with AI Studio. Let's say you want to write a fantasy novel or create a complex project proposal and use an LLM for support. The initial experiments were promising, but not yet satisfactory. We are testing further approaches until a satisfactory solution is found. The current state of our experiment is available as an experimental preview feature through your app configuration. Related PRs: ~~[PR #167](https://github.com/MindWorkAI/AI-Studio/pull/167), [PR #226](https://github.com/MindWorkAI/AI-Studio/pull/226)~~, [PR #376](https://github.com/MindWorkAI/AI-Studio/pull/376).
- Since March 2025: We have started developing the plugin system. There will be language plugins to offer AI Studio in other languages, configuration plugins to centrally manage certain providers and rules within an organization, and assistant plugins that allow anyone to develop their own assistants. We are using Lua as the plugin language:
  - [x] ~~Plan & implement the base plugin system ([PR #322](https://github.com/MindWorkAI/AI-Studio/pull/322))~~
  - [x] ~~Start the plugin system ([PR #372](https://github.com/MindWorkAI/AI-Studio/pull/372))~~
  - [x] ~~Add hot-reload support for plugins ([PR #377](https://github.com/MindWorkAI/AI-Studio/pull/377), [PR #391](https://github.com/MindWorkAI/AI-Studio/pull/391))~~
  - [ ] Add support for other languages (I18N) to AI Studio (~~[PR #381](https://github.com/MindWorkAI/AI-Studio/pull/381), [PR #400](https://github.com/MindWorkAI/AI-Studio/pull/400)~~, [PR #404](https://github.com/MindWorkAI/AI-Studio/pull/404))
  - [ ] Add an I18N assistant to translate all AI Studio texts to a certain language & culture
  - [ ] Provide MindWork AI Studio in German ([#31](https://github.com/MindWorkAI/Planning/issues/31))
  - [ ] Add configuration plugins, which allow pre-defining some LLM providers in organizations
  - [ ] Add an app store for plugins, showcasing community-contributed plugins from public GitHub and GitLab repositories. This will enable AI Studio users to discover, install, and update plugins directly within the platform.
  - [ ] Add assistant plugins
Other News:
- April 2025: We have two active financial supporters: Peer `peerschuett` and Dominic `donework`. Thank you very much for your support. MindWork AI reinvests these donations by passing them on to our AI Studio dependencies ([see here](https://github.com/orgs/MindWorkAI/sponsoring)). In the event that we receive large donations, we will first sign the app ([#56](https://github.com/MindWorkAI/Planning/issues/56)). If we receive even more donations, we will look for and pay staff to develop features for AI Studio.
- April 2025: The [German Aerospace Center (DLR)](https://en.wikipedia.org/wiki/German_Aerospace_Center) ([Website](https://www.dlr.de/en)) will use AI Studio at least within the scope of three projects and will also contribute to its further development. This is great news.
Features we have recently released:
- v0.9.40: Added support for the `o4` models from OpenAI. Also, we added Alibaba Cloud & Hugging Face as LLM providers.
- v0.9.39: Added the plugin system as a preview feature.
- v0.9.31: Added Helmholtz & GWDG as LLM providers. This is a huge improvement for many researchers out there who can use these providers for free. We added DeepSeek as a provider as well.
- v0.9.29: Added agents to support the RAG process (selecting the best data sources & validating retrieved data as part of the augmentation process).
- v0.9.26+: Added RAG for external data sources using our [ERI interface](https://mindworkai.org/#eri---external-retrieval-interface) as a preview feature.
- v0.9.25: Added [xAI](https://x.ai/) as a new provider. xAI provides their Grok models for generating content.
- v0.9.23: Added support for OpenAI `o` models (`o1`, `o1-mini`, `o3`, etc.); we also added an [ERI](https://github.com/MindWorkAI/ERI) server coding assistant as a preview feature behind the RAG feature flag. Your own ERI server can be used to gain access to, e.g., your enterprise data from within AI Studio.
- v0.9.22: Added options for preview features; added embedding provider configuration for RAG (preview) and writer mode (experimental preview).
- v0.9.18: Added the new Anthropic Haiku model; added Groq and Google Gemini as provider options.
- v0.9.17: Added the new Anthropic model `claude-3-5-sonnet-20241022`.
- v0.9.16: Added workspace display options & improved the layout of the app window.
- v0.9.15: Added the bias-of-the-day assistant. Tells you about a cognitive bias every day.
- v0.9.13: You can use `ollama` providers secured with API keys.
## What is AI Studio?
@@ -56,7 +68,19 @@ MindWork AI Studio is a free desktop app for macOS, Windows, and Linux. It provi
**Key advantages:**

- **Free of charge**: The app is free to use, both for personal and commercial purposes.
- **Independence**: You are not tied to any single provider. Instead, you can choose the providers that best suit your needs. Right now, we support:
  - [OpenAI](https://openai.com/) (GPT4o, GPT4.1, o1, o3, o4, etc.)
  - [Mistral](https://mistral.ai/)
  - [Anthropic](https://www.anthropic.com/) (Claude)
  - [Google Gemini](https://gemini.google.com)
  - [xAI](https://x.ai/) (Grok)
  - [DeepSeek](https://www.deepseek.com/en)
  - [Alibaba Cloud](https://www.alibabacloud.com) (Qwen)
  - [Hugging Face](https://huggingface.co/) using their [inference providers](https://huggingface.co/docs/inference-providers/index), such as Cerebras, Nebius, SambaNova, Novita, Hyperbolic, Together AI, Fireworks, and Hugging Face itself
  - Self-hosted models using [llama.cpp](https://github.com/ggerganov/llama.cpp), [ollama](https://github.com/ollama/ollama), or [LM Studio](https://lmstudio.ai/)
  - [Groq](https://groq.com/)
  - [Fireworks](https://fireworks.ai/)
  - For scientists and employees of research institutions, we also support [Helmholtz](https://helmholtz.cloud/services/?serviceID=d7d5c597-a2f6-4bd1-b71e-4d6499d98570) and [GWDG](https://gwdg.de/services/application-services/ai-services/) AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, and most German and many international universities.
- **Assistants**: You just want to quickly translate a text? AI Studio has so-called assistants for these and other tasks. No prompting is necessary when working with these assistants.
- **Unrestricted usage**: Unlike services like ChatGPT, which impose limits after intensive use, MindWork AI Studio offers unlimited usage through the providers' APIs.
- **Cost-effective**: You only pay for what you use, which can be cheaper than monthly subscription services like ChatGPT Plus, especially if used infrequently. But beware, here be dragons: For extremely intensive usage, the API costs can be significantly higher. Unfortunately, providers currently do not offer a way to display current costs in the app. Therefore, check your account with the respective provider to see how your costs are developing. When available, use prepaid and set a cost limit.
@@ -65,7 +65,7 @@ public partial class SettingsPanelProviders : SettingsPanelBase
            { x => x.IsSelfHosted, provider.IsSelfHosted },
            { x => x.IsEditing, true },
            { x => x.DataHost, provider.Host },
            { x => x.HFInferenceProviderId, provider.HFInferenceProvider },
        };

        var dialogReference = await this.DialogService.ShowAsync<ProviderDialog>("Edit LLM Provider", dialogParameters, DialogOptions.FULLSCREEN);
@@ -61,20 +61,22 @@
@if (this.DataLLMProvider.IsHFInstanceProviderNeeded())
{
    <MudSelect Disabled="@(!this.DataLLMProvider.IsHFInstanceProviderNeeded())" @bind-Value="@this.HFInferenceProviderId" Label="HF Inference Provider" Class="mb-3" OpenIcon="@Icons.Material.Filled.Dns" AdornmentColor="Color.Info" Adornment="Adornment.Start" Validation="@this.providerValidation.ValidatingHFInstanceProvider">
        @foreach (HFInferenceProvider inferenceProvider in Enum.GetValues(typeof(HFInferenceProvider)))
        {
            <MudSelectItem Value="@inferenceProvider">
                @inferenceProvider.ToName()
            </MudSelectItem>
        }
    </MudSelect>

    <MudJustifiedText Class="mb-3">Please double-check if your model name matches the curl specifications provided by the inference provider. If it doesn't, you might get a <b>Not Found</b> error when trying to use the model. Here's a <MudLink Href="https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct?inference_api=true&inference_provider=novita&language=sh" Target="_blank">curl example</MudLink>.</MudJustifiedText>
}

<MudStack Row="@true" AlignItems="AlignItems.Center">
    @if (this.DataLLMProvider.IsLLMModelProvidedManually())
    {
        <MudButton Variant="Variant.Filled" Size="Size.Small" StartIcon="@Icons.Material.Filled.OpenInBrowser" Href="@this.DataLLMProvider.GetModelsOverviewURL(this.HFInferenceProviderId)" Target="_blank">Show available models</MudButton>
        <MudTextField
            T="string"
            @bind-Text="@this.dataManuallyModel"
@@ -52,7 +52,7 @@ public partial class ProviderDialog : ComponentBase, ISecretId
    /// The HFInferenceProvider to use, e.g., CEREBRAS.
    /// </summary>
    [Parameter]
    public HFInferenceProvider HFInferenceProviderId { get; set; } = HFInferenceProvider.NONE;

    /// <summary>
    /// Is this provider self-hosted?
@@ -138,7 +138,7 @@ public partial class ProviderDialog : ComponentBase, ISecretId
            IsSelfHosted = this.DataLLMProvider is LLMProviders.SELF_HOSTED,
            Hostname = cleanedHostname.EndsWith('/') ? cleanedHostname[..^1] : cleanedHostname,
            Host = this.DataHost,
            HFInferenceProvider = this.HFInferenceProviderId,
        };
    }
@@ -39,7 +39,7 @@ public partial class Home : MSGComponentBase
    private static readonly TextItem[] ITEMS_ADVANTAGES =
    [
        new("Free of charge", "The app is free to use, both for personal and commercial purposes."),
        new("Independence", "You are not tied to any single provider. Instead, you might choose the provider that best suits your needs. Right now, we support OpenAI (GPT4o, o1, etc.), Mistral, Anthropic (Claude), Google Gemini, xAI (Grok), DeepSeek, Alibaba Cloud (Qwen), Hugging Face, and self-hosted models using llama.cpp, ollama, LM Studio, Groq, or Fireworks. For scientists and employees of research institutions, we also support Helmholtz and GWDG AI services. These are available through federated logins like eduGAIN to all 18 Helmholtz Centers, the Max Planck Society, most German, and many international universities."),
        new("Assistants", "You just want to quickly translate a text? AI Studio has so-called assistants for such and other tasks. No prompting is necessary when working with these assistants."),
        new("Unrestricted usage", "Unlike services like ChatGPT, which impose limits after intensive use, MindWork AI Studio offers unlimited usage through the providers API."),
        new("Cost-effective", "You only pay for what you use, which can be cheaper than monthly subscription services like ChatGPT Plus, especially if used infrequently. But beware, here be dragons: For extremely intensive usage, the API costs can be significantly higher. Unfortunately, providers currently do not offer a way to display current costs in the app. Therefore, check your account with the respective provider to see how your costs are developing. When available, use prepaid and set a cost limit."),
@@ -1,9 +1,9 @@
namespace AIStudio.Provider.HuggingFace;

/// <summary>
/// Enum for inference providers that Hugging Face supports.
/// </summary>
public enum HFInferenceProvider
{
    NONE,
@@ -0,0 +1,43 @@
namespace AIStudio.Provider.HuggingFace;

public static class HFInferenceProviderExtensions
{
    public static string Endpoints(this HFInferenceProvider provider, Model model) => provider switch
    {
        HFInferenceProvider.CEREBRAS => "cerebras/v1/",
        HFInferenceProvider.NEBIUS_AI_STUDIO => "nebius/v1/",
        HFInferenceProvider.SAMBANOVA => "sambanova/v1/",
        HFInferenceProvider.NOVITA => "novita/v3/openai/",
        HFInferenceProvider.HYPERBOLIC => "hyperbolic/v1/",
        HFInferenceProvider.TOGETHER_AI => "together/v1/",
        HFInferenceProvider.FIREWORKS => "fireworks-ai/inference/v1/",
        HFInferenceProvider.HF_INFERENCE_API => $"hf-inference/models/{model.ToString()}/v1/",
        _ => string.Empty,
    };

    public static string EndpointsId(this HFInferenceProvider provider) => provider switch
    {
        HFInferenceProvider.CEREBRAS => "cerebras",
        HFInferenceProvider.NEBIUS_AI_STUDIO => "nebius",
        HFInferenceProvider.SAMBANOVA => "sambanova",
        HFInferenceProvider.NOVITA => "novita",
        HFInferenceProvider.HYPERBOLIC => "hyperbolic",
        HFInferenceProvider.TOGETHER_AI => "together",
        HFInferenceProvider.FIREWORKS => "fireworks",
        HFInferenceProvider.HF_INFERENCE_API => "hf-inference",
        _ => string.Empty,
    };

    public static string ToName(this HFInferenceProvider provider) => provider switch
    {
        HFInferenceProvider.CEREBRAS => "Cerebras",
        HFInferenceProvider.NEBIUS_AI_STUDIO => "Nebius AI Studio",
        HFInferenceProvider.SAMBANOVA => "Sambanova",
        HFInferenceProvider.NOVITA => "Novita",
        HFInferenceProvider.HYPERBOLIC => "Hyperbolic",
        HFInferenceProvider.TOGETHER_AI => "Together AI",
        HFInferenceProvider.FIREWORKS => "Fireworks AI",
        HFInferenceProvider.HF_INFERENCE_API => "Hugging Face Inference API",
        _ => string.Empty,
    };
}
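To make the routing concrete: the `Endpoints()` mapping above supplies the path fragment that gets appended to `https://router.huggingface.co/` when a Hugging Face provider is constructed. The following standalone Python sketch (not AI Studio code; the function name and dictionary are ours) reproduces that composition, with the provider keys and path fragments copied from the C# switch:

```python
# Standalone sketch mirroring HFInferenceProviderExtensions.Endpoints().
# Provider keys and path fragments are taken verbatim from the C# code above.
ENDPOINTS = {
    "CEREBRAS": "cerebras/v1/",
    "NEBIUS_AI_STUDIO": "nebius/v1/",
    "SAMBANOVA": "sambanova/v1/",
    "NOVITA": "novita/v3/openai/",
    "HYPERBOLIC": "hyperbolic/v1/",
    "TOGETHER_AI": "together/v1/",
    "FIREWORKS": "fireworks-ai/inference/v1/",
}

def base_url(provider: str, model: str) -> str:
    """Compose the router base URL for a given inference provider."""
    if provider == "HF_INFERENCE_API":
        # Hugging Face's own inference API routes per model, as in the C# code.
        fragment = f"hf-inference/models/{model}/v1/"
    else:
        # Unknown providers map to an empty fragment (the C# default arm).
        fragment = ENDPOINTS.get(provider, "")
    return f"https://router.huggingface.co/{fragment}"

print(base_url("NOVITA", "meta-llama/Llama-3.1-8B-Instruct"))
# https://router.huggingface.co/novita/v3/openai/
```

This is the same URL that `ProviderHuggingFace` passes to its base class below, which is why selecting a different inference provider in the dialog changes the whole request route, not just a parameter.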
@@ -1,43 +0,0 @@
@@ -11,9 +11,9 @@ namespace AIStudio.Provider.HuggingFace;

public sealed class ProviderHuggingFace : BaseProvider
{
    public ProviderHuggingFace(ILogger logger, HFInferenceProvider hfProvider, Model model) : base($"https://router.huggingface.co/{hfProvider.Endpoints(model)}", logger)
    {
        logger.LogInformation($"We use the inference provider '{hfProvider}'. Thus, we use the base URL 'https://router.huggingface.co/{hfProvider.Endpoints(model)}'.");
    }

    #region Implementation of IProvider
@@ -135,7 +135,7 @@ public static class LLMProvidersExtensions
    /// <returns>The provider instance.</returns>
    public static IProvider CreateProvider(this AIStudio.Settings.Provider providerSettings, ILogger logger)
    {
        return providerSettings.UsedLLMProvider.CreateProvider(providerSettings.InstanceName, providerSettings.Host, providerSettings.Hostname, providerSettings.Model, providerSettings.HFInferenceProvider, logger);
    }

    /// <summary>
@@ -146,10 +146,10 @@ public static class LLMProvidersExtensions
    /// <returns>The provider instance.</returns>
    public static IProvider CreateProvider(this EmbeddingProvider embeddingProviderSettings, ILogger logger)
    {
        return embeddingProviderSettings.UsedLLMProvider.CreateProvider(embeddingProviderSettings.Name, embeddingProviderSettings.Host, embeddingProviderSettings.Hostname, embeddingProviderSettings.Model, HFInferenceProvider.NONE, logger);
    }

    private static IProvider CreateProvider(this LLMProviders provider, string instanceName, Host host, string hostname, Model model, HFInferenceProvider inferenceProvider, ILogger logger)
    {
        try
        {
@@ -165,7 +165,7 @@ public static class LLMProvidersExtensions
            LLMProviders.GROQ => new ProviderGroq(logger) { InstanceName = instanceName },
            LLMProviders.FIREWORKS => new ProviderFireworks(logger) { InstanceName = instanceName },
            LLMProviders.HUGGINGFACE => new ProviderHuggingFace(logger, inferenceProvider, model) { InstanceName = instanceName },

            LLMProviders.SELF_HOSTED => new ProviderSelfHosted(logger, host, hostname) { InstanceName = instanceName },
@@ -234,10 +234,10 @@ public static class LLMProvidersExtensions
        _ => false,
    };

    public static string GetModelsOverviewURL(this LLMProviders provider, HFInferenceProvider inferenceProvider) => provider switch
    {
        LLMProviders.FIREWORKS => "https://fireworks.ai/models?show=Serverless",
        LLMProviders.HUGGINGFACE => $"https://huggingface.co/models?inference_provider={inferenceProvider.EndpointsId()}",
        _ => string.Empty,
    };
@@ -25,7 +25,7 @@ public readonly record struct Provider(
    bool IsSelfHosted = false,
    string Hostname = "http://localhost:1234",
    Host Host = Host.NONE,
    HFInferenceProvider HFInferenceProvider = HFInferenceProvider.NONE) : ISecretId
{
    #region Overrides of ValueType
@@ -94,13 +94,13 @@ public sealed class ProviderValidation
 return null;
 }
 
-public string? ValidatingHFInstanceProvider(HFInstanceProvider instanceProvider)
+public string? ValidatingHFInstanceProvider(HFInferenceProvider inferenceProvider)
 {
 if(this.GetProvider() is not LLMProviders.HUGGINGFACE)
 return null;
 
-if (instanceProvider is HFInstanceProvider.NONE)
-return "Please select an Hugging Face instance provider.";
+if (inferenceProvider is HFInferenceProvider.NONE)
+return "Please select an Hugging Face inference provider.";
 
 return null;
 }
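The validator renamed above follows a common pattern: return an error message for an unset enum value, `null` when the selection is valid (or when the check does not apply). A self-contained sketch of that idea, with hypothetical enum members besides `NONE`:

```csharp
#nullable enable
// Minimal sketch (not AI Studio's actual code) of the validation pattern
// shown in the diff: null means "no validation error".
public enum HFInferenceProvider { NONE, CEREBRAS, SAMBA_NOVA }  // members besides NONE are illustrative

public static class HFValidation
{
    public static string? ValidateInferenceProvider(HFInferenceProvider inferenceProvider)
    {
        // An unset selection is the only invalid state this check cares about.
        if (inferenceProvider is HFInferenceProvider.NONE)
            return "Please select a Hugging Face inference provider.";

        return null; // valid selection: no error message
    }
}
```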
@@ -2,6 +2,7 @@
 - Added support for the announced OpenAI `o4` models.
 - Added Alibaba Cloud as a new provider. Thanks Peer `peerschuett` for the contribution.
 - Added the Hugging Face inference provider as an LLM provider to AI Studio. Thanks Peer `peerschuett` for the contribution.
+- Added the current CPU architecture to the about page. This information helps us identify issues.
 - Improved the LLM & embedding provider dialogs by hiding not relevant options.
 - Improved the provider selection by showing the name of the provider in the provider selection instead of its identifier.
 - Improved the developer experience by adding a tolerant enum converter for better configuration handling.
 
@@ -14,7 +14,7 @@ AI Studio is only available for modern 64-bit Windows systems. When you have an
 
 - **Intel/AMD:** In almost all other cases, you have an Intel/AMD system. [Download the x64 version](https://github.com/MindWorkAI/AI-Studio/releases/latest/download/MindWork.AI.Studio_x64-setup.exe) of AI Studio.
 
-When you try to install the app, you get a message regarding protection of your PC (see screenshots below). For Windows to trust our app, we need to purchase a certificate that costs around $1000 per year. Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark.
+When you try to install the app, you get a message regarding protection of your PC (see screenshots below). For Windows to trust our app, we need to purchase a certificate that [costs around $1000 per year](https://github.com/MindWorkAI/Planning/issues/56). Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark.
 
 When you are confident in the app's safety, click on "More info" and then "Run anyway" to proceed with the installation:
 
@@ -43,7 +43,7 @@ When you try to open the app, you get a message that the app is damaged:
 
 
 
-This is because we don't have an Apple Developer account, which costs around $100 per year. Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark.
+This is because we don't have an Apple Developer account, [which costs around $100 per year](https://github.com/MindWorkAI/Planning/issues/56). Would you like to help us with this? [Please consider supporting us](https://github.com/sponsors/MindWorkAI). You might want to [visit our release page](https://github.com/MindWorkAI/AI-Studio/releases/latest). There, we provide VirusTotal scan results for each release. If you are unsure about the safety of the app, you can check the results there. Ensure that the majority of scanners have a green checkmark.
 
 When you are confident in the app's safety, follow these steps:
 