What is AnythingLLM?
AnythingLLM allows you to build a fully private chatbot by using any LLM, embedding model, or vector database.
Self-hosting AnythingLLM on AWS allows you to share an AnythingLLM instance with members of your team. Additionally, by self-hosting you can access AnythingLLM from the browser or publish chat widgets to the internet.
Features
- Use context from any type of document
- Run any model you want, including locally run open-source models or enterprise models from OpenAI, Azure, AWS, and Google.
- Control your organization's privacy - nothing is shared unless you allow it.
Getting Started
After successfully deploying AnythingLLM on AWS with FlexStack, you have full control over your AI stack. To use models running locally, LM Studio or Ollama are great options. A tunneling service like ngrok can be used to make a locally running model accessible to the deployed AnythingLLM service.
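As a rough sketch of that setup (assuming Ollama's default port 11434, a `llama3` model as a placeholder, and an installed, authenticated ngrok client), exposing a local Ollama server could look like:

```shell
# Start the local model server; Ollama listens on port 11434 by default
ollama serve &

# Pull a model to serve (llama3 is just an example model name)
ollama pull llama3

# Tunnel the local port to the internet; ngrok prints a public
# forwarding URL (e.g. https://<random-id>.ngrok-free.app)
ngrok http 11434
```

In AnythingLLM's LLM preference settings, you would then set the Ollama base URL to the forwarding URL that ngrok prints, so the AWS-hosted instance can reach the model running on your machine.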
AnythingLLM also works great with paid options from the top providers. Simply navigate to the CloudFront service URL, or add a domain for a custom endpoint. From there, any configuration entered in the AnythingLLM onboarding workflow is saved and persisted via a network file system mount on the application containers.