Building The Future: What I Learnt At DeepLearning.AI's AI Dev 25 Conference

Beyond the Hype: Practical AI Development Trends in 2025
AI is evolving faster than most of us can keep up with. I recently attended DeepLearning.AI's AI Dev 25 in San Francisco (March 14, 2025), where industry leaders from Google, NVIDIA, IBM, and other tech giants shared insights that cut through the noise. Here's what you need to know about where AI development is heading in 2025.
The Shift from Building to Applying
Andrew Ng's keynote speech captured the conference's central theme perfectly: master the technology layers, then build the applications on top of them. We're moving past the phase of endless model training to focus on what actually matters—solving real problems.
"Ignore the hype like 'coding is dead'... invest instead in courses and learning."— Laurence Moroney, Building AI Applications, Panel Discussion
This shift brings good news: as coding becomes easier with AI assistance, you can do more of it. The traditional "plan forever, build slowly" approach is giving way to a more nimble "just code" mentality—while still staying responsible.
Agentic AI: Your Digital Teammate
Google's Project Astra, highlighted by Paige Bailey (Technical Director, Engineering Lead at Google DeepMind), represents the future of AI agents—an assistant in your pocket that handles tasks faster than ever before. Meanwhile, Meta is rapidly advancing its Llama models with different versions optimized for specific jobs.
What makes these new AI agents special?
- They continuously update from large information sources
- They function within defined workflows
- They're becoming more reliable through better evaluation methods
A fascinating example Bailey shared showed how these agents can analyse content autonomously: she had the model produce a complete table of the different types of dinosaur that appeared in a 10-minute YouTube video, without explicit instructions on how to identify them.
Memory Systems: The Secret Ingredient
Here's something most AI discussions miss: memory is just as important as reasoning. Harrison Chase, Co-Founder and CEO of LangChain, delivered an eye-opening presentation on why memory systems are crucial for next-generation AI. He broke down three types of AI memory:
- Semantic memory: Facts and knowledge
- Episodic memory: Events and experiences
- Procedural memory: How to do things
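Chase's three memory types can be sketched as a simple data structure. This is an illustrative toy, not LangChain's actual API; the class and method names here are my own invention.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Toy model of Chase's three memory types (hypothetical structure)."""
    semantic: dict = field(default_factory=dict)    # facts and knowledge
    episodic: list = field(default_factory=list)    # events and experiences
    procedural: dict = field(default_factory=dict)  # how to do things

    def remember_fact(self, key, value):
        self.semantic[key] = value

    def log_event(self, event):
        self.episodic.append(event)

    def store_procedure(self, name, steps):
        self.procedural[name] = steps

memory = AgentMemory()
memory.remember_fact("user_language", "Python")
memory.log_event("User asked for a unit-test template")
memory.store_procedure("write_tests", ["arrange", "act", "assert"])
```

The point of the split is that each store is updated and queried differently: facts get overwritten, episodes accumulate, and procedures are refined over many interactions.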
As Chase noted, "No one is a good prompter—it's too hard to forecast all the variables ahead of time—thus memory is important to help you improve over time." He demonstrated how Replit is built on top of LangChain, leveraging these memory capabilities to create more powerful development environments.
Apoorva Joshi from MongoDB further emphasized that "conversational AI memory is short—this limits utility and long-term contextual awareness." The solution? Vector search for RAG (Retrieval-Augmented Generation) that focuses on intent and meaning.
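At its core, the vector search Joshi described ranks documents by how close their embeddings are to the query's embedding. Here is a minimal sketch using cosine similarity over hand-made vectors; a real RAG system would use an embedding model and a vector database rather than toy three-dimensional vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, docs, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

# Toy corpus with pre-computed (made-up) embeddings
docs = [
    {"text": "Reset your password in settings", "vec": [0.9, 0.1, 0.0]},
    {"text": "Our office hours are 9-5",        "vec": [0.0, 0.2, 0.9]},
]
hits = retrieve([0.8, 0.2, 0.1], docs)
```

Because similarity is computed in embedding space, a query like "I can't log in" can match the password document even with no overlapping keywords, which is the "intent and meaning" focus Joshi described.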
Jeff Katzler, CTO & Founder of Chroma, reinforced this point: "Reasoning gets the focus... but memory is important and getting more focus over time." He argued that memory enables AI to learn from past interactions rather than starting from scratch each time.
On-Device AI: Intelligence at the Edge
The Qualcomm presentation revealed that modern smartphones already run 25 different AI models just to process a single photo. This showcases the significant shift toward on-device AI processing.
Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, also emphasized this trend, noting how important it is for CTOs to assess the location of their infrastructure based on their specific priorities.
Why is on-device AI becoming so important? Three compelling reasons:
- Speed: Real-time means under 25ms response time
- Privacy: Process sensitive data locally without sending it to the cloud
- Reliability: Work even without internet connection
Qualcomm has even released a calculator to help developers determine if their models can meet latency requirements on specific devices, making it easier to develop practical on-device AI solutions.
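The idea behind such a calculator can be approximated with simple back-of-the-envelope arithmetic. The sketch below is my own rough illustration (not Qualcomm's tool): estimate per-inference time from the model's compute cost and the device's sustained throughput, then compare it against a real-time budget such as the 25 ms figure mentioned above.

```python
def fits_realtime(model_gflops, device_gflops_per_s, budget_ms=25.0, utilization=0.5):
    """Estimate inference latency and check it against a real-time budget.

    Assumes the device sustains `utilization` of its peak throughput,
    a crude simplification that ignores memory bandwidth and scheduling.
    """
    est_ms = model_gflops / (device_gflops_per_s * utilization) * 1000.0
    return est_ms, est_ms <= budget_ms

# e.g. a 0.6 GFLOP model on an NPU with ~2 TFLOP/s peak
est, ok = fits_realtime(model_gflops=0.6, device_gflops_per_s=2000.0)
```

Real on-device performance depends heavily on quantization, memory bandwidth, and operator support, which is exactly why a device-specific calculator is more trustworthy than this kind of estimate.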
Tools and Frameworks Worth Watching
Several tools and frameworks stood out among the many mentioned by speakers:
- Google AI Studio – Paige Bailey demonstrated its code execution capabilities and image manipulation tools
- Crew.ai – Multiple presenters highlighted how it helps create agentic workflows for business processes
- Groq – Noted for offering the fastest inference time for conversational flows
- LangChain – Harrison Chase showed how it builds memory into LLM systems
- IBM BeeAI – Ismael Faro (VP Quantum AI at IBM Research) introduced this open-source platform for agent-to-agent communication
- Project Mariner – A Google DeepMind research prototype agent that browses the web to complete complex tasks
- LangMem – A specialised package from LangChain focusing on memory tools
Evaluating AI: Beyond the Benchmarks
How do you know if your AI system actually works? This question was explored in depth during a panel featuring Krishna Gade (VP of Engineering at Evidently) and others focused on evaluation methodologies.
The consensus was clear: "All benchmarks are wrong—think about the benchmarks as a surrogate." This powerful insight shifts how we should approach AI evaluation.
Instead of chasing numbers, the experts recommended:
- Mapping benchmark information to what you actually care about
- Understanding what the numbers actually mean in your specific context
- Identifying exactly what's going wrong with your model or agent
- Testing in reproducible ways that reflect real-world usage
As Krishna Gade put it: "You are what you measure, make sure that there's clarity on what good looks like."
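The panel's advice boils down to building your own task-specific evaluation rather than trusting a leaderboard number. Here is a minimal sketch of such a harness; `run_agent` is a hypothetical stand-in for whatever system you are testing, and the pass criteria are deliberately defined per case.

```python
def evaluate(run_agent, cases):
    """Score an agent on your own labelled cases.

    Returns overall accuracy plus per-case results, so individual
    failures can be inspected rather than hidden in an aggregate.
    """
    results = []
    for case in cases:
        output = run_agent(case["input"])
        results.append({
            "input": case["input"],
            "output": output,
            "pass": case["check"](output, case["expected"]),
        })
    accuracy = sum(r["pass"] for r in results) / len(results)
    return accuracy, results

# Each case encodes what "good" looks like for YOUR problem
cases = [
    {"input": "2+2", "expected": "4", "check": lambda out, exp: exp in out},
    {"input": "capital of France", "expected": "Paris",
     "check": lambda out, exp: exp in out},
]
acc, details = evaluate(lambda q: "4" if "2" in q else "Paris", cases)
```

Keeping the cases in version control and re-running them on every change gives the reproducible, real-world-reflecting testing the panel recommended.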
Percy Liang from Stanford University added valuable perspective on agentic evaluation, noting that while benchmarks like those on Hugging Face are useful, we're still in the early days of understanding how to properly evaluate complex AI agent behaviors.
Practical Advice for AI Developers
The most valuable takeaways weren't about specific models but about approaches:
- Cut through the noise by focusing on solving a tangible problem
- Build on your expertise rather than chasing every new AI trend
- Build in public to get feedback and improve faster
- Focus on the problem, not the tech stack, which will inevitably change
Agent-to-Agent Communication: The Next Frontier
Ismael Faro from IBM Research presented a fascinating look at the future of agent-to-agent communication. "Lots of frameworks for building agents exist, but it's hard to switch from one to another," he explained. This fragmentation creates significant rework as enterprises build AI solutions.
IBM's BeeAI is attempting to solve this problem by creating open standards for how agents communicate with each other. Faro demonstrated how one agent could discover and utilize the capabilities of another—potentially leading to networks of specialized AI agents working together.
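The discovery-and-delegation idea Faro demonstrated can be sketched in a few lines. This is a hypothetical illustration of the concept only, not BeeAI's actual protocol or API.

```python
class Agent:
    """Toy agent that advertises capabilities and can delegate tasks."""

    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # skill name -> callable

    def describe(self):
        """Advertise this agent's skills in a framework-neutral form."""
        return {"agent": self.name, "skills": sorted(self.capabilities)}

    def delegate(self, skill, payload, registry):
        """Find any registered agent offering `skill` and hand it the task."""
        for other in registry:
            if skill in other.capabilities:
                return other.capabilities[skill](payload)
        raise LookupError(f"no agent offers {skill!r}")

summarizer = Agent("summarizer", {"summarize": lambda text: text[:40] + "..."})
router = Agent("router", {})
registry = [summarizer]
result = router.delegate(
    "summarize",
    "Agent-to-agent communication lets specialized agents cooperate.",
    registry,
)
```

The key design point is that the router never imports the summarizer's framework; it only relies on the advertised skill name, which is the kind of open interface BeeAI is standardizing.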
What's Next?
Several speakers highlighted emerging areas to watch:
- Robotics is poised for AI-driven transformation
- Scientific applications, beyond just writing and coding
- Self-improving systems that learn from their own mistakes
- Google's AI "co-scientist" – a fleet of Gemini agents capable of conducting research tasks autonomously
Conclusion
After a full day of talks at AI Dev 25, one message was clear: the future belongs to those who can apply AI to solve real problems, not those who get caught up in the hype.
Andrew Ng's advice resonated throughout the conference: "Focus on the problem rather than the tech stack as that will change." This practical wisdom is especially valuable in a field evolving as rapidly as AI.
The most exciting part? We're still just at the beginning of this journey. AI tools are finally becoming accessible and practical enough for everyday use in both consumer and business contexts.
What AI application are you most excited about? Let me know in the comments!
Want to learn more about implementing AI in your projects? Check out these resources:
- DeepLearning.AI's course on LangChain
- Hugging Face's Agent Courses
- Qualcomm's On-Device AI Resources
- IBM BeeAI GitHub Repository
- Jeff Katzler's GitHub for memory implementation code