# AI Writing Assistant
Configure AI-powered text assistance for Partners using OpenAI, DeepSeek, or local models.
The AI Writing Assistant helps Partners create and improve text content directly within the dashboard. When enabled, an AI button appears in supported text fields, offering quick transformations like improving grammar, adjusting tone, or generating content.
## What It Does
| Feature | Description |
|---|---|
| Text Improvement | Refine grammar, spelling, and clarity |
| Tone Adjustments | Make text more formal, casual, or friendly |
| Content Generation | Generate descriptions, summaries, and copy |
| Quick Transformations | One-click formatting and style changes |
## Supported Providers
The system uses an OpenAI-compatible API format, supporting multiple providers:
| Provider | Description | Cost |
|---|---|---|
| OpenAI | GPT models from OpenAI | Pay-per-use API |
| DeepSeek | Alternative AI with competitive pricing | Pay-per-use API |
| LM Studio | Run open-source models locally | Free (local hardware) |
All providers use the same configuration variables, making it easy to switch between them.
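Because every provider speaks the same chat-completions format, a smoke test differs only in the base URL and key. A minimal sketch, assuming `OPENAI_BASE_URL` and `OPENAI_API_KEY` are exported in your shell and the model name matches your provider:

```shell
# The request body is identical for every provider; only the base URL,
# API key, and model name change between OpenAI, DeepSeek, and LM Studio.
BASE_URL="${OPENAI_BASE_URL:-https://api.openai.com/v1}"
BODY='{"model":"gpt-5-nano","messages":[{"role":"user","content":"Say hello"}]}'

# -sf makes curl fail quietly on HTTP errors; fall back to a message so
# the script still reports something when the endpoint is unreachable.
RESPONSE=$(curl -sf "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "request failed")
echo "$RESPONSE"
```

Swapping providers means changing `BASE_URL` (and the key/model); the request itself stays the same.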
## Configuration

Enable AI features by adding the following to your `.env` file:
### OpenAI

```env
OPENAI_ENABLED=true
OPENAI_API_KEY="your-api-key"
OPENAI_MODEL="gpt-5-nano"
OPENAI_BASE_URL="https://api.openai.com/v1"
```
### DeepSeek

```env
OPENAI_ENABLED=true
OPENAI_API_KEY="your-deepseek-key"
OPENAI_MODEL="deepseek-chat"
OPENAI_BASE_URL="https://api.deepseek.com"
```
### LM Studio (Local)

```env
OPENAI_ENABLED=true
OPENAI_API_KEY="lm-studio"
OPENAI_BASE_URL="http://localhost:1234/v1"
```
After updating your `.env` file, clear the config cache:

```shell
php artisan config:clear
```
## OpenAI Setup
To use OpenAI's GPT models:
- Visit platform.openai.com and create an account
- Navigate to API Keys in your dashboard
- Click Create new secret key and name it for reference
- Copy the key immediately—it won't be shown again
- Add the key to your `.env` file as `OPENAI_API_KEY`
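Before pointing the app at the key, you can sanity-check it from the command line. A sketch, assuming the key is exported as `OPENAI_API_KEY`; a `200` status means the key works, while `401` means it is wrong or revoked:

```shell
# List-models is the cheapest endpoint for validating a key: it costs
# nothing and returns 401 immediately for a bad or revoked key.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models || true)

# curl reports 000 when the host could not be reached at all.
RESULT="HTTP status: ${STATUS:-unreachable}"
echo "$RESULT"
```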
### Recommended Models
OpenAI's GPT-5 family provides the best balance of quality and cost. For the AI Writing Assistant:
| Model | Best For | Context Window |
|---|---|---|
| `gpt-5-nano` | Most use cases: fastest and most affordable | 400K tokens |
| `gpt-5-mini` | Complex writing requiring more reasoning | 400K tokens |
| `gpt-4.1-mini` | Non-reasoning tasks, fast responses | 1M tokens |
**Note:** Legacy models (`gpt-3.5-turbo`, `gpt-4o`, `gpt-4o-mini`) still work but are not recommended. Use `gpt-5-nano` for the best price-to-performance ratio.
For the latest models and pricing, see OpenAI's model documentation.
## DeepSeek Setup
DeepSeek provides an OpenAI-compatible API with competitive pricing:
- Create an account at platform.deepseek.com
- Navigate to your API dashboard
- Generate an API key
- Add the DeepSeek configuration to your `.env` file
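DeepSeek's OpenAI-compatible API can be smoke-tested the same way as OpenAI's, by swapping the base URL. A hedged sketch, assuming the DeepSeek key is exported as `OPENAI_API_KEY` and that the compatible list-models route is exposed at `/models`:

```shell
# Ask DeepSeek's compatible endpoint for its model list; "unreachable"
# is printed when the call fails (no network, bad key, etc.).
MODELS=$(curl -sf https://api.deepseek.com/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" || echo "unreachable")
echo "$MODELS"
```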
### DeepSeek Models
| Model | Description |
|---|---|
| `deepseek-chat` | General-purpose conversational model |
| `deepseek-coder` | Optimized for code-related tasks |
## LM Studio Setup
LM Studio lets you run open-source AI models locally—ideal for development, testing, or privacy-focused deployments.
### Installation
- Download LM Studio from lmstudio.ai
- Install and launch the application
- Search for and download a model (e.g., Llama 3.2, Mistral, or Qwen)
### Starting the Local Server
- Open LM Studio
- Go to the Local Server tab
- Select your downloaded model
- Click Start Server
- The API will be available at `http://localhost:1234/v1`
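Once the server is started, you can confirm it is answering before configuring the app. A sketch, assuming the default port 1234:

```shell
# Ask the local server for its model list; LM Studio serves the same
# OpenAI-compatible /v1/models route on port 1234 by default.
if curl -sf http://localhost:1234/v1/models >/dev/null 2>&1; then
  SERVER_STATUS="running"
else
  SERVER_STATUS="not reachable on port 1234"
fi
echo "LM Studio server: $SERVER_STATUS"
```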
### Recommended Local Models
For text writing assistance, consider:
- Llama 3.2 8B — Good balance of speed and quality
- Mistral 7B — Fast and efficient
- Qwen 2.5 7B — Strong multilingual support
Model quality depends on your hardware. Start with smaller models (7B–8B parameters) if you have limited RAM.
## Troubleshooting
### Connection Errors
| Issue | Solution |
|---|---|
| "API key invalid" | Verify the key is copied correctly with no extra spaces |
| "Connection refused" | Check network connectivity or firewall settings |
| "Model not found" | Confirm the model name matches your provider's documentation |
### LM Studio Issues
| Issue | Solution |
|---|---|
| Connection refused | Ensure the local server is running |
| Port already in use | Check if another service uses port 1234 |
| Slow responses | Try a smaller model or check system resources |
| Server crashes | Reduce context length or use a quantized model |
### General Tips
- Clear the cache after any `.env` changes: `php artisan config:clear`
- Check logs in `storage/logs/laravel.log` for error details
- Test the API with a curl command before troubleshooting the app
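The log-checking tip can be scripted. A sketch, assuming the default Laravel log path; the grep pattern is illustrative and can be adjusted to your provider's error messages:

```shell
# Pull recent AI-related lines from the Laravel log. The fallback keeps
# the command useful when the log is missing or has no matching lines.
LOG_FILE="storage/logs/laravel.log"
HITS=$(tail -n 100 "$LOG_FILE" 2>/dev/null | grep -iE 'openai|curl|error' \
  || echo "no matching log lines")
echo "$HITS"
```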
## Related Topics
- Languages & Translations — Multi-language support
- Email Configuration — Another `.env` configuration
- Partners Overview — Partner dashboard features