Of course! This is an excellent and robust architecture for building a powerful, AI-driven enterprise application. Using Laravel for the core application logic, Vue for a dynamic front-end, and n8n as a decoupled workflow/AI engine is a modern and scalable approach. Your initial idea to use n8n webhooks as API endpoints is the right way to go. Let’s break down the implementation step-by-step, covering the integration, the database strategy, and the real-time chat interface.

High-Level Architecture

Here’s a visual representation of the flow we’re aiming for:
+-----------+      (1) User sends message      +-----------------+      (2) POST request with message     +----------------+
|           | -------------------------------> |                 | -------------------------------------> |                |
| Vue.js    |      (7) AI response streams     | Laravel Backend |      (3) Calls n8n Webhook             | n8n Workflow   |
| Chat UI   | <------------------------------- | (API, SSE/WS)   |                                        | (AI Agent)     |
|           |                                  |                 | <------------------------------------  |                |
+-----------+                                  +-------+---------+      (5) Writes response back          +-------+--------+
                                                       |                                                        |
                                                       | (4) & (6) Read/Write Conversation State                  |
                                                       |                                                        |
                                                       +----------------------+---------------------------------+
                                                                              |
                                                                    +---------+---------+
                                                                    |   MongoDB         |
                                                                    | (Shared Database) |
                                                                    +-------------------+

Part 1: Laravel & n8n Integration

Your plan to use a webhook is perfect. This decouples your systems: Laravel doesn’t need to know how the AI agent works, only where to send the data and how to get a response.

1. The n8n Workflow (The AI Agent)

First, create your workflow in n8n.
  • Trigger Node: Webhook
    • This node generates a unique URL. This is the API endpoint your Laravel application will call.
    • It will automatically parse incoming JSON data (e.g., { "userId": 123, "conversationId": "xyz", "message": "Hello AI" }).
  • MongoDB Node: Get Conversation History
    • Use the conversationId from the webhook data to query your MongoDB conversations collection.
    • Fetch the last 5-10 messages to provide context to the AI (see the query sketch after this list).
  • AI Agent / LLM Node:
    • Use one of the built-in AI nodes (OpenAI, Hugging Face, Cohere, etc.).
    • Construct the prompt using the user’s new message and the conversation history you just fetched.
    • Execute the AI call. Many nodes support streaming.
  • MongoDB Node: Save AI Response
    • Take the response from the AI node.
    • Update the conversations collection in MongoDB, adding the AI’s message to the history.
  • (Optional) HTTP Request Node: Callback to Laravel
    • If you want n8n to proactively tell Laravel “I’m done,” you can add a final node that calls another API endpoint on your Laravel app. This is useful for fully asynchronous flows.
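The history lookup in the MongoDB node might combine a filter with a $slice projection so only the most recent messages are sent to the LLM. A sketch (the exact field names depend on how the node’s Find operation is configured):
{
  "filter": { "conversationId": "c_12345_abcde" },
  "projection": { "messages": { "$slice": -10 } }
}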
2. Laravel’s Role (The Orchestrator)

Laravel will handle user authentication, manage the conversation state, and communicate with n8n. Create a Service or Action Class in Laravel to encapsulate the logic for talking to n8n:
// app/Services/N8nAgentService.php
namespace App\Services;

use Illuminate\Support\Facades\Http;
use App\Models\Conversation; // Assuming you use an Eloquent model for Mongo

class N8nAgentService
{
    public function sendMessage(string $conversationId, string $message, int $userId)
    {
        // 1. Prepare the payload for n8n
        $payload = [
            'conversationId' => $conversationId,
            'message' => $message,
            'userId' => $userId,
        ];

        // 2. Call the n8n webhook
        // Store the webhook URL in your .env file for security
        $n8nWebhookUrl = config('services.n8n.webhook_url');

        // Use timeout to prevent long-waiting requests if n8n is slow
        $response = Http::timeout(30)->post($n8nWebhookUrl, $payload);

        if ($response->failed()) {
            // Handle error: n8n might be down or the workflow failed
            throw new \Exception('Failed to communicate with the AI agent.');
        }

        // This assumes n8n workflow is set to "Respond to Webhook"
        // and returns the final AI response directly.
        return $response->json();
    }
}
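The service reads its settings via config('services.n8n.*'), so wire those up in config/services.php and your .env file. A minimal sketch (the env variable names below are assumptions):
// config/services.php
'n8n' => [
    'webhook_url' => env('N8N_WEBHOOK_URL'),
    'api_key' => env('N8N_API_KEY'),
],

// .env
// N8N_WEBHOOK_URL=https://n8n.example.com/webhook/ai-agent
// N8N_API_KEY=some-long-random-string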
Secure Your n8n Webhook: In an enterprise app, you can’t have an open webhook.
  • Method 1: API Key: Add a static API key to the n8n webhook’s header authentication and send it from Laravel.
Http::withHeaders([
    'X-API-KEY' => config('services.n8n.api_key'),
])->post(...);
  • Method 2: Signed URLs (More Secure): Laravel can generate a temporary, signed callback URL, include it in the payload sent to n8n, and validate the signature when n8n calls back. This is more complex but more secure.
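A sketch of that signed-callback pattern using Laravel's built-in signed URLs (the route name is hypothetical):
use Illuminate\Support\Facades\URL;

// Generate a signed callback URL (valid for 10 minutes) and include it in the n8n payload.
$callbackUrl = URL::temporarySignedRoute(
    'chat.n8n-callback', // hypothetical named route
    now()->addMinutes(10),
    ['conversationId' => $conversationId]
);

// In the callback controller, reject invalid or expired signatures:
// if (! $request->hasValidSignature()) {
//     abort(401);
// }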

Part 2: Shared Database Strategy (MongoDB)

Using MongoDB as a shared state manager is a great idea.

Suggested schema for a conversations collection:
{
  "_id": "65f1b...",
  "conversationId": "c_12345_abcde", // A unique ID you generate in Laravel
  "userId": 123,
  "title": "Discussion about Q4 Report",
  "createdAt": "ISODate(...)",
  "updatedAt": "ISODate(...)",
  "messages": [
    {
      "role": "user",
      "content": "Hello, can you summarize the Q4 financial report?",
      "timestamp": "ISODate(...)"
    },
    {
      "role": "assistant",
      "content": "Of course! The Q4 report shows a 15% increase in revenue...",
      "timestamp": "ISODate(...)"
    }
  ]
}
  • Laravel: Uses an Eloquent model (e.g., via jenssegers/laravel-mongodb, now maintained as mongodb/laravel-mongodb) to create, read, and append messages (see the model sketch below).
  • n8n: Uses its built-in MongoDB node to read the messages array for context and append the new AI response. This keeps the full conversation history in one place.
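For the Laravel side, here is a minimal model sketch (assuming the jenssegers package; push() maps to MongoDB’s $push operator, and addMessage() is a hypothetical helper):
// app/Models/Conversation.php -- minimal sketch
namespace App\Models;

use Jenssegers\Mongodb\Eloquent\Model;

class Conversation extends Model
{
    protected $connection = 'mongodb';
    protected $collection = 'conversations';
    protected $guarded = [];

    // Append a message to the embedded messages array ($push).
    public function addMessage(string $role, string $content): void
    {
        $this->push('messages', [
            'role' => $role,
            'content' => $content,
            'timestamp' => now(),
        ]);
    }
}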

Part 3: Vue.js Chat Interface (SSE vs. PrismPHP)

This is the crucial part for user experience. You want to show the AI response as it’s being generated (the “typing” effect). This requires streaming.

Option 1: Server-Sent Events (SSE)

SSE is a simple, efficient, and native browser technology for one-way communication from server to client. It’s perfect for streaming AI responses. How it works:
  1. Vue.js makes a single API request to a Laravel endpoint.
  2. This connection stays open.
  3. Laravel calls n8n.
  4. n8n starts its process. As the AI model generates tokens (words), n8n can use its streaming capabilities to send these tokens back to Laravel (e.g., via a second “streaming” webhook call).
  5. Laravel receives these tokens and immediately pushes them down the open SSE connection to Vue.
Implementation:

Laravel Controller:
// app/Http/Controllers/ChatController.php
use App\Services\N8nAgentService;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\StreamedResponse;

public function stream(Request $request)
{
    $conversationId = $request->input('conversationId');
    $message = $request->input('message');

    // 1. Save user message to DB
    // ...

    // 2. This response object keeps the connection open
    $response = new StreamedResponse(function () use ($conversationId, $message) {

        // 3. Call your n8n workflow.
        // IMPORTANT: For true token-by-token streaming, the n8n workflow should be
        // configured to stream its response back. This could be via a webhook n8n
        // calls on another Laravel endpoint, which then dispatches an event that
        // this closure listens for.

        // A simpler (but less robust) way is to have n8n's final response be the
        // full text, and then stream it word-by-word from Laravel.
        $fullResponse = app(N8nAgentService::class)->sendMessage($conversationId, $message, auth()->id());
        $words = explode(' ', $fullResponse['text']);

        foreach ($words as $word) {
            // "data:" is the required SSE framing, terminated by a blank line
            echo "data: " . json_encode(['token' => $word . ' ']) . "\n\n";
            if (ob_get_level() > 0) {
                ob_flush();
            }
            flush();
            usleep(50000); // 50ms delay between words
        }

        // Send a final message to signal the end
        echo "data: " . json_encode(['event' => 'done']) . "\n\n";
        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();
    });

    $response->headers->set('Content-Type', 'text/event-stream');
    $response->headers->set('Cache-Control', 'no-cache');
    $response->headers->set('Connection', 'keep-alive');
    $response->headers->set('X-Accel-Buffering', 'no'); // disable response buffering behind nginx

    return $response;
}
Vue.js Component:
// ChatComponent.vue
<script setup>
import { ref } from 'vue';

const messages = ref([]);
const currentAiResponse = ref('');

function sendMessage(userMessage) {
  messages.value.push({ role: 'user', content: userMessage });

  // Use the native EventSource API
  const eventSource = new EventSource(`/api/chat/stream?message=${encodeURIComponent(userMessage)}`);

  let aiMessageIndex = messages.value.length;
  messages.value.push({ role: 'assistant', content: '' });

  eventSource.onmessage = (event) => {
    const data = JSON.parse(event.data);

    if (data.event === 'done') {
      eventSource.close(); // We're done, close the connection
      return;
    }

    // Append the token to the current AI message
    messages.value[aiMessageIndex].content += data.token;
  };

  eventSource.onerror = (err) => {
    console.error("EventSource failed:", err);
    eventSource.close();
  };
}
</script>
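One practical note: the native EventSource API cannot send custom headers, so you cannot attach a Bearer token to the stream request. The simplest fit is cookie/session-based auth (e.g., Sanctum’s SPA mode or the web guard). A sketch, assuming the endpoint is registered where session middleware runs:
// routes/web.php -- sketch: protect the SSE endpoint with session auth,
// since EventSource sends cookies but cannot set an Authorization header.
Route::get('/api/chat/stream', [ChatController::class, 'stream'])->middleware('auth');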

Option 2: PrismPHP (or WebSockets via Laravel Reverb/Soketi)

PrismPHP is a high-level library that uses WebSockets (via Laravel Reverb or Soketi) to provide real-time features.

When to use PrismPHP/WebSockets:
  • You need bi-directional communication (e.g., server needs to push alerts to the client without a prior request).
  • You need features like “user is typing…” indicators, online presence, or multi-user collaboration.
  • You are already using WebSockets for other features in your app.
How it would work:
  1. Setup: You’d need to install and configure Laravel Reverb (or Soketi) and Laravel Echo on the front end.
  2. Vue.js: The client connects to a WebSocket channel (e.g., private-chat.${userId}).
  3. Laravel: When the user sends a message, Laravel calls the n8n webhook.
  4. n8n -> Laravel Callback: The n8n workflow, upon receiving tokens from the AI, calls a callback API endpoint on Laravel.
  5. Laravel Event: This callback endpoint dispatches a Laravel event (e.g., AiTokenReceived).
  6. Broadcast: This event is broadcasted over the WebSocket channel.
  7. Vue.js Listener: Laravel Echo listens for this event on the front end and updates the UI.
This is more powerful but also significantly more complex to set up (managing a WebSocket server, broadcasting, events, etc.).
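Steps 5 and 6 of that flow map to a standard broadcastable event. A minimal sketch, assuming Reverb/Echo are already configured (the event name, payload, and channel are illustrative):
// app/Events/AiTokenReceived.php -- sketch
namespace App\Events;

use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;

class AiTokenReceived implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets;

    public function __construct(
        public int $userId,
        public string $token
    ) {}

    public function broadcastOn(): PrivateChannel
    {
        // Matches the private-chat.{userId} channel the client subscribes to
        return new PrivateChannel('chat.' . $this->userId);
    }
}

// Dispatched from the n8n callback endpoint:
// broadcast(new AiTokenReceived($userId, $token));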

Recommendation: SSE vs. PrismPHP

For your specific use case of an AI chat interface, start with SSE.
  • Simplicity: It’s much easier to implement, requires no extra server daemons (like Reverb), and is highly efficient for this exact task.
  • Sufficiency: It perfectly solves the problem of streaming text from the server to the client.
If your application later evolves to require features like multi-user chat rooms, live collaboration, or real-time notifications completely unrelated to the user’s current action, then migrating to or adding WebSockets with a library like PrismPHP would be a logical next step.

Chatbox Implementation

Here is the visualization of the chat interface, the Vue 2 implementation, and how to handle advanced features like interactivity and multimodal uploads.

1. Text-Based Wireframe of the Chat Interface

This is a standard layout for an embedded enterprise chat agent, often placed in a drawer, a modal, or a dedicated page.
+----------------------------------------------------------------------+
|  Enterprise AI Assistant                                     [X] Close |
+----------------------------------------------------------------------+
|                                                                      |
|  [Bot Icon]                                                          |
|  AI Agent                                                            |
|  Hello! I am connected to the company database and workflows.        |
|  How can I assist you with the application today?                    |
|  10:00 AM                                                            |
|                                                                      |
|                                                      [User Avatar]   |
|                                                           You        |
|                       Can you pull up the Q3 sales report and        |
|                       summarize the key findings for region East?    |
|                                                           10:01 AM   |
|                                                                      |
|  [Bot Icon]                                                          |
|  AI Agent                                                            |
|  Certainly. I'm querying the Q3 data now...                          |
|                                                                      |
|  Here is a summary for the East Region:                              |
|  * **Total Revenue:** $1.2M (+12% YoY)                               |
|  * **Top Product:** Enterprise License X                             |
|                                                                      |
|  ```json                                                             |
|  { "data_point": "example_code_block" }                              |
|  ```                                                                 |
|  _Would you like me to trigger the approval workflow for this?_      |
|  10:02 AM                                                            |
|                                                                      |
|                                                                      |
| . . . . . . . . . . . . (Scrollable Area) . . . . . . . . . . . . . .|
+----------------------------------------------------------------------+
| [📎] [ Type your message here...                          ]  [ ➤ Send ]|
+----------------------------------------------------------------------+
   ^                                                           ^
Attachment Btn                                             Send Btn
(For images/files)

2. Vue 2 Implementation (Options API)

Here is how you implement the streaming chat interface using Vue 2 and the Options API.

Prerequisites: You will likely want a library to render Markdown, as the AI will often return formatted text (bold, lists, code blocks). marked is a popular choice:

npm install marked
<!-- resources/js/components/ChatInterface.vue -->
<template>
    <div class="chat-container">
        <div class="chat-messages" ref="messagesContainer">
            <div
                v-for="(msg, index) in messages"
                :key="index"
                :class="['message', msg.role === 'user' ? 'message-user' : 'message-ai']"
            >
                <div class="message-content">
                    <!-- Use v-html to render the markdown parsed by 'marked' -->
                    <!-- NOTE: Sanitize this in production if you don't trust the AI output -->
                    <div v-if="msg.role === 'assistant'" v-html="renderMarkdown(msg.content)"></div>
                    <div v-else>{{ msg.content }}</div>

                     <!-- Display the uploaded image if it exists -->
                    <img v-if="msg.imageUrl" :src="msg.imageUrl" class="message-image" />
                </div>
            </div>
             <div v-if="isTyping" class="message message-ai">
                 <span class="typing-indicator">AI is typing...</span>
             </div>
        </div>

        <div class="chat-input">
             <!-- Hidden file input for multimodal uploads -->
            <input
                type="file"
                ref="fileInput"
                @change="handleFileUpload"
                accept="image/*"
                style="display: none;"
            />
            <button type="button" @click="$refs.fileInput.click()" class="btn-attach">
                📎
            </button>

            <input
                v-model="newMessage"
                @keyup.enter="sendMessage"
                type="text"
                placeholder="Type your message..."
                :disabled="isTyping"
            />
            <button @click="sendMessage" :disabled="isTyping || !newMessage">Send</button>
        </div>
         <div v-if="tempImageUrl" class="image-preview">
             Attached: {{ tempImageName }}
             <button @click="clearAttachment">x</button>
        </div>
    </div>
</template>

<script>
import axios from 'axios'; // used by handleFileUpload below
import { marked } from 'marked';

export default {
    name: 'ChatInterface',
    data() {
        return {
            messages: [],
            newMessage: '',
            isTyping: false,
            eventSource: null,
            tempImageUrl: null, // URL of the image *after* uploading to Laravel
            tempImageName: null
        };
    },
    methods: {
        renderMarkdown(text) {
            return marked.parse(text);
        },
        scrollToBottom() {
             this.$nextTick(() => {
                const container = this.$refs.messagesContainer;
                container.scrollTop = container.scrollHeight;
            });
        },
        async handleFileUpload(event) {
            const file = event.target.files[0];
            if (!file) return;

            this.tempImageName = file.name;

            const formData = new FormData();
            formData.append('image', file);

            try {
                 // 1. Upload the image to Laravel FIRST
                 const response = await axios.post('/api/chat/upload', formData, {
                    headers: { 'Content-Type': 'multipart/form-data' }
                });
                // 2. Laravel saves it (e.g., to S3 or public disk) and returns the URL
                this.tempImageUrl = response.data.url;
            } catch (error) {
                alert('Failed to upload image');
                this.clearAttachment();
            }
        },
        clearAttachment() {
             this.tempImageUrl = null;
             this.tempImageName = null;
             this.$refs.fileInput.value = '';
        },
        sendMessage() {
            if (this.newMessage.trim() === '' && !this.tempImageUrl) return;

            const userMessage = this.newMessage;
            const imageUrl = this.tempImageUrl;

            // Add user message to the UI immediately
            this.messages.push({
                role: 'user',
                content: userMessage,
                imageUrl: imageUrl
            });

            this.newMessage = '';
            this.clearAttachment();
            this.isTyping = true;
            this.scrollToBottom();

            // Prepare a new empty message for the AI response
            this.messages.push({ role: 'assistant', content: '' });
            const aiMessageIndex = this.messages.length - 1;

            // Construct the URL with parameters
            let url = `/api/chat/stream?message=${encodeURIComponent(userMessage)}`;
            if (imageUrl) {
                 url += `&imageUrl=${encodeURIComponent(imageUrl)}`;
            }

            // Initialize SSE
            this.eventSource = new EventSource(url);

            this.eventSource.onmessage = (event) => {
                const data = JSON.parse(event.data);

                if (data.event === 'done') {
                    this.closeConnection();
                    return;
                }

                // Append the incoming token to the last message
                this.messages[aiMessageIndex].content += data.token;
                this.scrollToBottom();
            };

            this.eventSource.onerror = (err) => {
                console.error("EventSource failed:", err);
                this.closeConnection();
                // Optional: Add an error message to the UI
                 this.messages[aiMessageIndex].content += "\n\n*[Error: Connection interrupted]*";
            };
        },
        closeConnection() {
            if (this.eventSource) {
                this.eventSource.close();
                this.eventSource = null;
            }
            this.isTyping = false;
        }
    },
    beforeDestroy() {
        this.closeConnection();
    }
};
</script>

<style scoped>
/* Very Basic Styling */
.chat-container { display: flex; flex-direction: column; height: 500px; border: 1px solid #ccc; }
.chat-messages { flex-grow: 1; overflow-y: auto; padding: 15px; }
.message { margin-bottom: 15px; padding: 10px; border-radius: 8px; max-width: 80%; }
.message-user { align-self: flex-end; background-color: #dcf8c6; margin-left: auto; }
.message-ai { align-self: flex-start; background-color: #f1f0f0; }
.chat-input { display: flex; padding: 10px; border-top: 1px solid #ccc; }
.chat-input input[type="text"] { flex-grow: 1; margin: 0 10px; }
.message-image { max-width: 200px; margin-top: 10px; border-radius: 5px; }
.image-preview { padding: 5px 10px; background: #eee; font-size: 0.9em; }
</style>

3. Can the Chat Interface Be Interactive Like ChatGPT?

Yes, absolutely. The architecture you’ve designed (Shared MongoDB + n8n LLM) already provides the foundation for this. Here is what makes the experience “interactive” and how your setup achieves it:
  1. Context Memory (Crucial):
  • ChatGPT remembers what you said 5 minutes ago.
  • Your solution: You are using MongoDB to store the conversations collection. Before n8n calls the AI, it must read the previous messages from MongoDB and include them in the prompt sent to the LLM. This is the key to interactivity.
  2. Streaming Responses:
  • You see the words appear as they are generated, which feels alive.
  • Your solution: Using SSE (Server-Sent Events) as demonstrated above provides this exact experience.
  3. Rich Formatting:
  • ChatGPT returns code blocks, lists, bold text, and tables.
  • Your solution: The LLM (e.g., OpenAI) will return Markdown by default. By using a library like marked in your Vue component (as shown in the code above), you convert that Markdown into HTML, giving you the same rich output.
  4. Capability Access (Tools):
  • Being “interactive” in an enterprise context means the AI can do things, not just talk.
  • Your solution: This is where n8n shines. You can give your AI agent “tools.”
  • User asks: “Check the status of order #123.”
  • n8n Workflow: The AI Agent node decides it needs to use the “Check Order Tool.” n8n then executes a Postgres/MySQL node (connecting to your Laravel DB) or an HTTP Request node (calling your internal ERP API) to get the data and feed it back to the AI to generate the answer.
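To make that concrete, the “Check Order Tool” could simply be an internal Laravel endpoint that n8n’s HTTP Request node calls. A hypothetical sketch (the route, Order model, and auth middleware are all assumptions):
// routes/api.php -- hypothetical internal endpoint for the n8n "Check Order Tool"
use App\Models\Order; // assumed model
use Illuminate\Support\Facades\Route;

Route::middleware('auth:sanctum')->get('/internal/orders/{id}/status', function (string $id) {
    $order = Order::findOrFail($id);

    // n8n feeds this JSON back to the LLM so it can phrase the answer.
    return response()->json([
        'id' => $order->id,
        'status' => $order->status,
    ]);
});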

4. How to Implement Multimodal Prompts (Uploading Images)

Handling image uploads with SSE requires a two-step process because EventSource (SSE) only supports GET requests, and you cannot upload a file body in a GET request.

The Process Flow

  1. User Selects Image: In Vue, the user clicks the paperclip and selects an image.
  2. Vue Uploads Image: Vue immediately sends a standard POST request (using Axios, NOT EventSource) to a Laravel endpoint, uploading the file.
  3. Laravel Saves Image: Laravel receives the file, validates it, and stores it (e.g., in storage/app/public or on AWS S3).
  4. Laravel Returns URL: Laravel responds to Vue with the public URL of the saved image.
  5. Vue Initiates Chat: Vue now initiates the SSE connection, passing both the user’s text message AND the image URL as query parameters.
  6. Laravel -> n8n: Laravel passes both the text and the image URL to the n8n webhook.
  7. n8n AI Agent: The n8n AI node (must use a model that supports vision, like gpt-4o, gpt-4-turbo, or claude-3-opus) receives the prompt and the image URL and processes both.

Laravel Implementation

Route:
// routes/api.php
Route::post('/chat/upload', [ChatController::class, 'upload']);
Route::get('/chat/stream', [ChatController::class, 'stream']);
Controller:
// app/Http/Controllers/ChatController.php
public function upload(Request $request)
{
    $request->validate([
        'image' => 'required|image|max:10240', // Max 10MB
    ]);

    if ($request->file('image')) {
        // Store in the 'public' disk (storage/app/public/chat-uploads)
        // Ensure you have run `php artisan storage:link`
        $path = $request->file('image')->store('chat-uploads', 'public');

        return response()->json([
            'url' => asset('storage/' . $path)
        ]);
    }

    return response()->json(['error' => 'Upload failed'], 400);
}

public function stream(Request $request)
{
    $conversationId = $request->input('conversationId');
    $message = $request->input('message');
    $imageUrl = $request->input('imageUrl'); // Capture the image URL

    // ... (Save user message, including imageUrl, to MongoDB) ...

    $response = new StreamedResponse(function () use ($conversationId, $message, $imageUrl) {
         // Pass the message AND the imageUrl to your n8n service.
         // Note: sendMessage() gains an optional $imageUrl parameter -- see the sketch below.
         $fullResponse = app(N8nAgentService::class)->sendMessage(
             $conversationId,
             $message,
             auth()->id(),
             $imageUrl
         );

         // ... (streaming logic remains the same) ...
    });
    // ... (same SSE headers as before) ...
    return $response;
}
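To support this, the sendMessage() method in N8nAgentService from Part 1 can be extended with an optional $imageUrl parameter. A minimal sketch:
// app/Services/N8nAgentService.php (extended signature -- sketch)
public function sendMessage(string $conversationId, string $message, int $userId, ?string $imageUrl = null)
{
    $payload = [
        'conversationId' => $conversationId,
        'message' => $message,
        'userId' => $userId,
    ];

    // Only include the image URL when one was attached.
    if ($imageUrl !== null) {
        $payload['imageUrl'] = $imageUrl;
    }

    $response = Http::timeout(30)->post(config('services.n8n.webhook_url'), $payload);

    if ($response->failed()) {
        throw new \Exception('Failed to communicate with the AI agent.');
    }

    return $response->json();
}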

n8n Implementation

In n8n, ensure your Webhook trigger is set to receive the imageUrl in the JSON body or query parameters. Then, map that imageUrl into the “Image” or “Attachment” field of your AI Model node.
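For reference, the webhook body n8n would receive in the multimodal case looks like this (values are illustrative):
{
  "conversationId": "c_12345_abcde",
  "userId": 123,
  "message": "What does this chart show?",
  "imageUrl": "https://your-app.example/storage/chat-uploads/chart.png"
}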