Day 8 – Caching GPT Recommendations for Speed and Efficiency #LaravelGPT #AICaching #OpenAI #PerformanceOptimization #LaravelCache #SmartUX


Today, we’ll cache GPT product recommendations so that users don’t trigger a new API call on every page load. This reduces costs, improves response times, and reuses each user’s recommendations for a limited window before regenerating them.


🧠 Step 1: Wrap GPT logic with cache

In ProductRecommendationService, update the recommend() method:

use Illuminate\Support\Facades\Cache;

public function recommend(User $user)
{
    return Cache::remember("recommendations_user_{$user->id}", now()->addHours(6), function () use ($user) {
        // Existing GPT logic from the earlier days: load the products the
        // user has viewed and clicked (left as placeholders here).
        $viewed = ...;
        $clicked = ...;

        $messages = [
            [
                'role' => 'user',
                'content' => 'Here is what the user viewed: ' . json_encode($viewed)
            ],
            [
                'role' => 'user',
                'content' => 'Here is what the user clicked: ' . json_encode($clicked)
            ],
            [
                'role' => 'user',
                'content' => 'Suggest 5 relevant product ideas based on both viewing and clicking behavior.'
            ],
        ];

        $function = [
            'name' => 'suggest_products',
            'description' => 'Suggest relevant products for a user based on viewed and clicked products',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'suggestions' => [
                        'type' => 'array',
                        'items' => ['type' => 'string'],
                    ],
                ],
                'required' => ['suggestions'],
            ],
        ];

        $response = OpenAI::chat()->create([
            'model' => 'gpt-4-0613',
            'messages' => $messages,
            'functions' => [$function],
            'function_call' => ['name' => 'suggest_products'],
        ]);

        $arguments = json_decode(
            $response['choices'][0]['message']['function_call']['arguments'] ?? '{}',
            true
        );

        $suggestions = $arguments['suggestions'] ?? [];

        // With no suggestions, the query below would match every product,
        // so return an empty collection instead.
        if (empty($suggestions)) {
            return collect();
        }

        return Product::query()
            ->where(function ($query) use ($suggestions) {
                foreach ($suggestions as $term) {
                    $query->orWhere('name', 'LIKE', "%$term%");
                }
            })
            ->limit(5)
            ->get();
    });
}
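
One thing to keep in mind: the cache key string will appear in two places once Step 2 below reuses it. A small, optional refinement is to centralize it in the service. Here is a minimal sketch, assuming the same ProductRecommendationService class from the earlier days (the cacheKey() and forgetFor() method names are illustrative, not existing code):

// Inside ProductRecommendationService

protected function cacheKey(User $user): string
{
    return "recommendations_user_{$user->id}";
}

public function forgetFor(User $user): void
{
    Cache::forget($this->cacheKey($user));
}

With this in place, recommend() can call $this->cacheKey($user), and any code that needs to invalidate the cache can call $service->forgetFor($user) instead of repeating the key string.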

🔁 Step 2: Invalidate cache when needed

In your controller or route, you can optionally allow a forced refresh:

use App\Services\ProductRecommendationService; // adjust to your service's namespace
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

Route::get('/recommendations', function (Request $request, ProductRecommendationService $service) {
    $user = \App\Models\User::first(); // demo

    if ($request->get('refresh')) {
        Cache::forget("recommendations_user_{$user->id}");
    }

    $products = $service->recommend($user);

    return view('recommendations.index', compact('products'));
});

Now append ?refresh=1 to the URL to force a fresh set of GPT recommendations.
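
The ?refresh=1 switch is handy for testing, but you may also want the cache to clear itself whenever new behavior is recorded, so recommendations follow the user’s latest activity. A minimal sketch, assuming clicks are tracked in a controller method along the lines of the earlier days (the trackClick() name is illustrative, not part of the existing code):

use Illuminate\Support\Facades\Cache;

public function trackClick(Request $request, Product $product)
{
    // ... existing click-tracking logic ...

    // Drop the cached recommendations so the next page load regenerates them.
    Cache::forget("recommendations_user_{$request->user()->id}");
}

Whether you invalidate on every interaction or simply let the six-hour TTL expire is a cost trade-off: more invalidation means fresher suggestions but more OpenAI calls.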


✅ Summary

✅ Today you:

  • Cached GPT product recommendations using Laravel’s Cache
  • Reduced OpenAI API usage for repeated page loads
  • Enabled cache expiry and manual refresh

✅ Up next (Day 9): We’ll create a dashboard chart showing top clicked GPT-recommended products across all users for analytics and trends.
