ArtyLLaMA: Empowering AI Creativity in the Open Source Community 🦙🎨
ArtyLLaMA is an experimental chat interface for Open Source Large Language Models, leveraging the power of Ollama, OpenAI, and Anthropic. It features dynamic content generation and display through an "Artifacts-like" system, making AI-assisted creativity more accessible and interactive.
Project Description
ArtyLLaMA is not a model itself, but a framework that allows users to interact with various language models. It provides a user-friendly interface for generating creative content, code, and visualizations using state-of-the-art language models.
Key Features:
- 🦙 Multi-Provider Integration: Seamless support for Ollama, OpenAI, and Anthropic models (see the routing sketch after this list)
- 🎨 Dynamic Artifact Generation: Create and display content artifacts during chat interactions
- 🖥️ Real-time HTML Preview: Instantly visualize HTML artifacts on an interactive canvas
- 🔄 Multi-Model Support: Choose from multiple language models across providers
- 📱 Responsive Design: Mobile-friendly interface built with Tailwind CSS
- 🌙 Dark Mode: Easy on the eyes with a default dark theme
- 🚀 Local Inference: Run models locally for privacy and customization
- 🖋️ Code Syntax Highlighting: Enhanced readability for various programming languages
- 🎭 SVG Rendering Support: Display AI-created vector graphics
- 🌐 3D Visualization: Render 3D scenes and simulations with Three.js
- 🔐 User Authentication: JWT-based system for user registration and login (see the middleware sketch after this list)
- 📚 Personalized Chat History: Store and retrieve messages based on user ID
- 🔍 Semantic Search: Cross-model semantic search capabilities in chat history
- 🔀 Dynamic Embedding Collections: Support for multiple embedding models with automatic collection creation
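The multi-provider integration can be pictured as a thin routing layer over the three backends. The sketch below is a minimal illustration, not ArtyLLaMA's actual code: the function name, provider-selection logic, and defaults are assumptions. It only shows how a single chat-request shape might be dispatched to Ollama's local HTTP API, the OpenAI SDK, and the Anthropic SDK.

```javascript
// Minimal sketch of multi-provider chat routing (illustrative only; not
// ArtyLLaMA's actual implementation). Assumes the official `openai` and
// `@anthropic-ai/sdk` packages, a local Ollama server on port 11434,
// and Node 18+ for the global fetch API.
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// messages: [{ role: 'user' | 'assistant', content: string }, ...]
async function chat(provider, model, messages) {
  if (provider === 'ollama') {
    // Ollama exposes a local REST API; stream: false returns one JSON reply.
    const res = await fetch('http://localhost:11434/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model, messages, stream: false }),
    });
    const data = await res.json();
    return data.message.content;
  }
  if (provider === 'openai') {
    const res = await openai.chat.completions.create({ model, messages });
    return res.choices[0].message.content;
  }
  if (provider === 'anthropic') {
    const res = await anthropic.messages.create({ model, max_tokens: 1024, messages });
    return res.content[0].text;
  }
  throw new Error(`Unknown provider: ${provider}`);
}

// Example: await chat('ollama', 'llama3', [{ role: 'user', content: 'Hello!' }]);
```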
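Likewise, the JWT-based authentication and per-user chat history can be sketched as a small Express middleware. This is an assumption-laden illustration rather than the project's real routes: the endpoint paths, token payload fields, and environment variable names are hypothetical.

```javascript
// Hypothetical sketch of JWT authentication in Express (illustrative only;
// route paths, payload fields, and env var names are assumptions, not
// ArtyLLaMA's actual implementation). Uses the `jsonwebtoken` package.
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();
app.use(express.json());
const JWT_SECRET = process.env.JWT_SECRET;

// Issue a token once credential checks (omitted here) succeed.
app.post('/api/login', (req, res) => {
  const { username } = req.body;
  const token = jwt.sign({ sub: username }, JWT_SECRET, { expiresIn: '1h' });
  res.json({ token });
});

// Middleware: reject requests lacking a valid "Authorization: Bearer <token>" header.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : '';
  try {
    req.user = jwt.verify(token, JWT_SECRET); // throws on missing/invalid/expired token
    next();
  } catch {
    res.status(401).json({ error: 'Unauthorized' });
  }
}

// Chat history scoped to the authenticated user ID (placeholder response).
app.get('/api/history', requireAuth, (req, res) => {
  res.json({ userId: req.user.sub, messages: [] });
});

app.listen(3000);
```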
Intended Use
ArtyLLaMA is designed for developers, researchers, and creative professionals who want to:
- Explore the capabilities of various language models
- Generate and iterate on creative content, including code, designs, and written text
- Prototype AI-assisted applications and workflows
- Experiment with local and cloud-based AI inference
Limitations
- Local inference requires a separate Ollama installation
- Performance depends on the user's hardware capabilities or chosen cloud provider
- Does not include built-in content moderation (users should implement their own safeguards)
Ethical Considerations
Users of ArtyLLaMA should be aware of:
- Potential biases present in the underlying language models
- The need for responsible use and content generation
- Privacy implications of using AI-generated content and storing chat history
Technical Specifications
- Frontend: React-based with Tailwind CSS
- Backend: Node.js with Express.js
- Required Libraries: React, Express.js, Tailwind CSS, Three.js, and others (see package.json)
- Supported Models: Any model available through Ollama, OpenAI, or Anthropic
- Hardware Requirements: Varies based on the chosen model and deployment method
Getting Started
- Clone the repository:
git clone https://github.com/kroonen/ArtyLLaMA.git
- Install dependencies:
npm install
- Set up environment variables (see the README for details on API keys, and the sample .env sketch after these steps)
- Run the application:
npm run dev
- Access the interface at
http://localhost:3000
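The exact environment variable names are defined in the project's README; the file below is only a hypothetical example of the kind of .env a multi-provider setup typically needs, with placeholder values.

```
# Hypothetical .env example -- variable names and values are placeholders;
# see the README for the keys ArtyLLaMA actually reads.
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
OLLAMA_BASE_URL=http://localhost:11434
JWT_SECRET=replace-with-a-long-random-string
PORT=3000
```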
For more detailed instructions, including Docker setup, visit our GitHub repository.
License
ArtyLLaMA is distributed under the ArtyLLaMa Research Project License. This license allows free use for non-commercial, academic, and research purposes with attribution. Commercial use requires explicit written permission. See the LICENSE file for full details.
Citation
If you use ArtyLLaMA in your research or projects, please cite it as follows:
@software{artyllama2024,
  author = {Robin Kroonen},
  title = {ArtyLLaMA: Empowering AI Creativity in the Open Source Community},
  year = {2024},
  url = {https://github.com/kroonen}
}
Contact
For questions, feedback, or collaborations, please reach out through the project's GitHub repository. We welcome contributions and feedback from the community, subject to the terms of our license!