I recently went through an interesting experience during my master’s internship. I was tasked with building an AI solution, and I tried every approach I could think of. I managed to achieve average results, but they were unstable and didn’t quite meet expectations. Despite the challenges, the company recruited me and asked me to continue working on the project to make it more stable and reliable.
The problem I’m facing is that the Large Language Model (LLM) is responsible for most of the errors. I’ve tried everything I can think of, from researching new techniques to experimenting with different approaches, but I’m still hitting a wall. It’s frustrating, but it’s also a great learning opportunity: I’m realizing that building a stable AI solution is far more complex than I initially thought.
I’m sharing my experience in the hopes that it might help others who are facing similar challenges. Have you ever worked on an AI project that seemed simple at first but turned out to be much more complicated? How did you overcome the obstacles, and what did you learn from the experience?
In my case, I’m still trying to figure out the best approach to stabilize the LLM and improve the overall performance of the AI solution. If you have any suggestions or advice, I’d love to hear them. Let’s discuss the challenges of creating reliable AI solutions and how we can learn from each other’s experiences.
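To give the discussion something concrete to start from, here’s a minimal sketch of one common stabilization pattern: validate the model’s output against an expected structure and retry on failure, rather than trusting a single response. The `call_llm` function and the `"answer"` key below are placeholders I made up for illustration, not part of any real project or SDK.

```python
import json

def call_llm(prompt):
    """Placeholder for a real LLM call (hypothetical; swap in your provider's SDK)."""
    raise NotImplementedError

def stable_json_answer(prompt, llm=call_llm, retries=3):
    """Ask the model for JSON and retry until the reply parses and
    contains the expected key. Returns None if every attempt fails."""
    for _ in range(retries):
        try:
            data = json.loads(llm(prompt))
            if "answer" in data:  # schema check: reject replies missing the field
                return data
        except (json.JSONDecodeError, TypeError):
            continue  # malformed output: try again
    return None
```

The idea is to move reliability out of the model and into the surrounding code: the model stays unpredictable, but the application only ever sees outputs that passed validation.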


