@attribute [Route(Routes.ASSISTANT_ERI)]
@using MudExtensions
@inherits AssistantBaseCore
You can imagine it like this: hypothetically, if Wikipedia implemented the ERI, it would vectorize
all of its pages using an embedding method. All of Wikipedia's data would remain with Wikipedia, including the
vector database (a decentralized approach). Any AI Studio user could then add Wikipedia as a data source and
thereby significantly reduce LLM hallucinations when answering knowledge questions.
Related links:
ERI repository with example implementation in .NET and C#
Interactive documentation aka Swagger UI
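To make the idea more concrete, the following sketch shows what a single retrieval endpoint of an ERI server might look like as an ASP.NET Core minimal API. The route, the record shapes, and the hard-coded match are assumptions chosen for illustration only; the authoritative contract is defined by the ERI specification and the example implementation in the repository linked above.

// A minimal sketch of one retrieval endpoint of an ERI server.
// Requires a project using the Microsoft.NET.Sdk.Web SDK (implicit usings enabled).
// Route, DTO shapes, and the in-memory result are assumptions for illustration only.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical contract: the client sends the user's question, the server answers
// with matching text chunks from its own data. The data itself never leaves the
// organization; only the matching chunks are returned.
app.MapPost("/retrieval", (RetrievalRequest request) =>
{
    // In a real server, the organization's own search runs here,
    // e.g. a vector search over locally stored embeddings.
    var matches = new[]
    {
        new Context("Wikipedia article 'Embedding'", "An embedding maps data into a vector space ...")
    };
    return Results.Ok(new RetrievalResponse(matches));
});

app.Run("http://localhost:8080");

// Hypothetical DTOs for this sketch:
record RetrievalRequest(string LatestUserPrompt);
record Context(string Source, string Content);
record RetrievalResponse(Context[] Contexts);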
Auto Save
The ERI specification will change over time. You probably want to keep your ERI server up to date, which means
regenerating its code from time to time. To avoid having to re-enter everything each time, all your inputs and decisions can be
saved automatically. Would you like this?
Common ERI server settings
@foreach (var language in Enum.GetValues<ProgrammingLanguages>())
{
@language.Name()
}
@if (this.selectedProgrammingLanguage is ProgrammingLanguages.OTHER)
{
}
@foreach (var version in Enum.GetValues<ERIVersion>())
{
@version
}
Download specification
Data source settings
@foreach (var dataSource in Enum.GetValues<DataSources>())
{
@dataSource.Name()
}
@if (this.selectedDataSource is DataSources.CUSTOM)
{
}
@if (this.selectedDataSource > DataSources.FILE_SYSTEM)
{
}
@if (this.NeedHostnamePort())
{
@if (this.dataSourcePort < 1024)
{
Warning: Ports below 1024 are reserved for system services. Your ERI server would need to run with elevated permissions (root user).
}
}
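If elevated permissions are not an option, the generated server can simply listen on a port above 1023. As a sketch, with ASP.NET Core's Kestrel the port is just part of the URL the server binds to; the port number below is an arbitrary example, not a value prescribed by the ERI specification.

// Sketch: binding an ASP.NET Core (Kestrel) server to a non-privileged port (>= 1024)
// so it can run without root permissions. Port 8443 is an arbitrary example.
// Requires a project using the Microsoft.NET.Sdk.Web SDK (implicit usings enabled).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/health", () => Results.Ok("ready"));   // hypothetical health-check endpoint
app.Run("http://0.0.0.0:8443");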
Authentication settings
@foreach (var authMethod in Enum.GetValues<Auth>())
{
@authMethod.Name()
}
@if (this.selectedAuthenticationMethods.Contains(Auth.KERBEROS))
{
@foreach (var os in Enum.GetValues())
{
@os.Name()
}
}
Data protection settings
@foreach (var option in Enum.GetValues())
{
@option.Name()
}
Embedding settings
You will likely use one or more embedding methods to encode the meaning of your data into a typically high-dimensional vector
space. In that case, you will use a vector database to store and search these vectors (called embeddings). However, you don't
have to use embedding methods: if your retrieval method works without any embeddings, you can ignore this section. An example: you
store files on a file server, and your retrieval method works exclusively with file names in the file system, so you don't
need embeddings.
You can specify more than one embedding method. This can be useful when you want to use different embeddings for different queries
or data types. For example, one embedding for texts, another for images, and a third for videos, etc.
Name
Type
Actions
@context.EmbeddingName
@context.EmbeddingType
Edit
Delete
@if (this.embeddings.Count == 0)
{
No embedding methods configured yet.
}
Add Embedding Method
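As a sketch of how several embedding methods for different data types might be described, the snippet below models one entry per data type. The record mirrors the two table columns above (name and type); the UsedFor field and all sample values are assumptions for illustration only.

using System;
using System.Collections.Generic;

// Sketch: one configured embedding method per data type, as described above.
var embeddings = new List<EmbeddingMethod>
{
    new("Text embedding", "Transformer-based text encoder", "Articles and wiki pages"),
    new("Image embedding", "CLIP-style image encoder", "Diagrams and screenshots"),
};

foreach (var e in embeddings)
    Console.WriteLine($"{e.EmbeddingName}: {e.EmbeddingType} ({e.UsedFor})");

// Hypothetical record mirroring the table columns above:
record EmbeddingMethod(string EmbeddingName, string EmbeddingType, string UsedFor);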
Data retrieval settings
You can integrate additional libraries. Perhaps you want to evaluate the prompts in advance using a machine learning method or analyze them with a text
mining approach? Or maybe you want to preprocess images contained in the prompts? For such advanced scenarios, you can specify which libraries you want to use here.
It's best to describe which library you want to integrate and for which purpose. That way, the LLM that writes the ERI server for you can try to use these
libraries effectively, which should result in less rework. If you don't know which libraries are necessary, you can instead describe
the intended use, and the LLM will attempt to choose suitable libraries. However, hallucinations can occur, and fictional libraries might be selected.
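As a sketch of what such a preprocessing step might look like: the hypothetical helper below reduces the user's prompt to a few keywords before retrieval, using only the .NET base library. The method name and its place in the pipeline are assumptions; a real ERI server might delegate this step to a dedicated text-mining or ML library, as described above.

using System;
using System.Linq;

// Hypothetical prompt-preprocessing step: reduce the user's prompt to a few
// keywords before running the actual retrieval.
static string[] ExtractKeywords(string prompt, int max = 8)
{
    var stopWords = new[] { "the", "a", "an", "is", "are", "of", "to", "and", "or", "in", "for", "what", "how" };
    return prompt
        .Split(new[] { ' ', ',', '.', '?', '!', ';', ':' }, StringSplitOptions.RemoveEmptyEntries)
        .Select(word => word.ToLowerInvariant())
        .Where(word => word.Length > 2 && !stopWords.Contains(word))
        .Distinct()
        .Take(max)
        .ToArray();
}

Console.WriteLine(string.Join(", ", ExtractKeywords("How does the embedding interface of the ERI work?")));
// prints: does, embedding, interface, eri, work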
Provider selection for generation
The task of writing the ERI server for you is very complex. Therefore, a very powerful LLM is needed to successfully accomplish this task.
Small local models will probably not be sufficient. Instead, try using a large cloud-based or a large self-hosted model.