Andrej Karpathy's keynote on June 17, 2025 at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2Plek...
Today's L&D is more than just content. Or at least it should be.
When we think about AI in L&D, we often think about AI in learning design. Yet, to meet the needs of the business, L&D leaders need to orchestrate design, data, decisions and dialogue - incidentally, these are all things that AI can help with.
In learning design, we already extensively use AI not just for content production, but also for user research, as a sparring partner and a sounding board (that was one of the top write-in use cases in Donald's and my AI in L&D survey last year).
In learning strategy, AI can help make sense of business, people and skills data (featured use case: asking AI to find gaps in learning or performance support provision in your organisation), or work as a thought partner to help you bridge learning and business strategy. Crucially, it can also help you engage stakeholders by preparing you for conversations and tailoring your communications to different audiences.
In terms of personalised support, AI interacts directly with employees to help them do their jobs: practise tricky conversations through role-plays and personalised feedback, prioritise and contextualise learning content to their needs, and, lately, retrieve exactly the information they need from almost anywhere in the company's knowledge base.
Finally, in learning operations, AI can help do more than just draft emails and reports. Working together with humans, AI can help select the right vendors for the learning ecosystem, streamline employee help desk operations, analyse, make sense of, and act on different kinds of data generated in L&D, and, of course, help L&D communicate with the rest of the business.
Researcher, producer, thought partner, communicator: if your organisation only uses AI to write scripts, you're leaving three quarters of the L&D value chain on the table.
I like a good table, and I hope this one will help you think about how to get more value out of your AI use.
---
P.S. I spent quite a lot of time arguing with myself about the dots on the table. Feel free to disagree and suggest AI roles or use cases that I have missed!
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
Distinguishing performance gains from learning when using generative AI - published in Nature Reviews Psychology!
Excited to share our latest commentary just published in Nature Reviews Psychology!
Generative AI tools such as ChatGPT are reshaping education, promising improvements in learner performance and reduced cognitive load.
But here's the catch: do these immediate gains translate into deep and lasting learning?
Reflecting on recent viral systematic reviews and meta-analyses on #ChatGPT and #Learning, we argue that educators and researchers need to clearly differentiate short-term performance benefits from genuine, durable learning outcomes.
Key takeaways:
✔ Immediate boosts with generative AI tools don't necessarily equal durable learning
✔ While generative AI can ease cognitive load, excessive reliance might negatively impact critical thinking, metacognition, and learner autonomy
✔ Long-term, meaningful skill development demands going beyond immediate performance metrics
Recommendations for future research and practice:
1. Shift toward assessing retention, transfer, and deep cognitive processing
2. Promote active learner engagement, critical evaluation, and metacognitive reflection
3. Implement longitudinal studies exploring the relationship between generative AI assistance and prior learner knowledge
Special thanks to my amazing collaborators and mentors, Samuel Greiff, Jason M. Lodge, and Dragan Gasevic, for their invaluable contributions, guidance, and encouragement. A big shout-out to Dr. Teresa Schubert for her insightful comments and wonderful support throughout the editorial process!
Full article here: https://lnkd.in/g3YDQUrH
Full-text access (view-only version): https://rdcu.be/erwIt
#GenerativeAI #ChatGPT #AIinEducation #LearningScience #Metacognition #Cognition #EdTech #EducationalResearch #BJETspecialIssue #NatureReviewsPsychology #FutureOfEducation #OpenScience
In a now viral study, researchers examined how using ChatGPT for essay writing affects our brains and cognitive abilities. They divided participants into three groups: one using ChatGPT, one using search engines, and one using just their brains. Through EEG monitoring, interviews, and analysis of the essays, they discovered some unsurprising results about how AI use impacts learning and cognitive engagement.
Five key takeaways stood out to me (not an exhaustive list), within the context of this particular study:
1. The Cognitive Debt Issue
The study indicates that participants who used ChatGPT exhibited the weakest neural connectivity patterns when compared to those relying on search engines or unaided cognition. This suggests that defaulting to generative AI may function as an intellectual shortcut, diminishing rather than strengthening cognitive engagement.
Researchers are increasingly describing the tradeoff between short-term ease and productivity and the long-term erosion of independent thinking and critical skills as "cognitive debt". This parallels the concept of technical debt, when developers prioritise quick solutions over robust design, leading to hidden costs, inefficiencies, and increased complexity downstream.
2. The Memory Problem
Strikingly, users of ChatGPT had difficulty recalling or quoting from essays they had composed only minutes earlier. This undermines the notion of augmentation; rather than supporting cognitive function, the tool appears to offload essential processes, impairing retention and deep processing of information.
3. The Ownership Gap
Participants who used ChatGPT reported a reduced sense of ownership over their work. If we normalise over-reliance on AI tools, we risk cultivating passive knowledge consumers rather than active knowledge creators.
4. The Homogenisation Effect
Analysis showed that essays from the LLM group were highly uniform, with repeated phrases and limited variation, suggesting reduced cognitive and expressive diversity. In contrast, the Brain-only group produced more varied and original responses. The Search group fell in between.
5. The Potential for Constructive Re-engagement
There is, however, promising evidence for meaningful integration of AI when used in conjunction with prior unaided effort:
"Those who had previously written without tools (Brain-only group), the so-called Brain-to-LLM group, exhibited significant increase in brain connectivity across all EEG frequency bands when allowed to use an LLM on a familiar topic. This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control."
This points to the potential for AI to enhance cognitive function when it is used as a complement to, rather than a substitute for, initial human effort.
At over 200 pages, expect multiple paper submissions out of this extensive body of work.
https://lnkd.in/gzicDHp2
No, your brain does not perform better after LLM or during LLM use.
See our paper for more results: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" (link in the comments).
For 4 months, 54 students were divided into three groups: ChatGPT, Google search (with AI features excluded), and Brain-only. Across 3 sessions, each wrote essays on SAT prompts. In an optional 4th session, participants switched: LLM users used no tools (LLM-to-Brain), and the Brain-only group used ChatGPT (Brain-to-LLM).
I. NLP and Essay Content
- LLM Group: Essays were highly homogeneous within each topic, showing little variation. Participants often relied on the same expressions or ideas.
- Brain-only Group: Diverse and varied approaches across participants and topics.
- Search Engine Group: Essays were shaped by search engine-optimized content; their ontology overlapped with the LLM group but not with the Brain-only group.
II. Essay Scoring (Teachers vs. AI Judge)
- Teachers detected patterns typical of AI-generated content and scored LLM essays lower for originality and structure.
- AI Judge gave consistently higher scores to LLM essays, missing human-recognized stylistic traits.
III. EEG Analysis
Connectivity: The Brain-only group showed the highest neural connectivity, especially in alpha, theta, and delta bands. LLM users had the weakest connectivity, up to 55% lower in low-frequency networks. The Search Engine group showed high visual cortex engagement, aligned with web-based information gathering.
Session 4 Results:
- LLM-to-Brain participants underperformed cognitively, with reduced alpha/beta activity and poor content recall.
- Brain-to-LLM participants showed strong re-engagement, better memory recall, and efficient tool use.
LLM-to-Brain participants had potential limitations in achieving robust neural synchronization essential for complex cognitive tasks.
Results for Brain-to-LLM participants suggest that strategic timing of AI tool introduction following initial self-driven effort may enhance engagement and neural integration.
IV. Behavioral and Cognitive Engagement
- Quoting Ability: LLM users failed to quote accurately, while Brain-only participants showed robust recall and quoting skills.
- Ownership: Brain-only group claimed full ownership of their work; LLM users expressed either no ownership or partial ownership.
- Critical Thinking: Brain-only participants cared more about 'what' and 'why' they wrote; LLM users focused on 'how'.
- Cognitive Debt: Repeated LLM use led to shallow content repetition and reduced critical engagement. This suggests a buildup of "cognitive debt", deferring mental effort at the cost of long-term cognitive depth.
Support and share!
#MIT #AI #Brain #Neuroscience #CognitiveDebt
Finally!
You can now choose the right model for your CustomGPT.
CustomGPTs are, for me, the best feature in ChatGPT, and they have been badly neglected over the last 12 months.
With model selection, the first good upgrade has now arrived.
Mini-guide to model selection:
o3 -> complex problems and data analysis
4.5 -> creative tasks and copywriting
4o -> image processing
4.1 -> coding
In my opinion, you don't need any of the other models.
My strategy advisor, for example, is set to o3 (better planning ability for complex tasks), whereas the hook writer gets GPT-4.5 (better writing style).
If you want to use the CustomGPTs yourself:
80+ templates are freely available in our assistant database.
P.S. What do you think of the update?
99% of people get this wrong: they use the terms AI Agent and Agentic AI interchangeably, but they describe two fundamentally different architectures!
Let's clarify it once and for all:
1. AI Agents: Tools with Autonomy, Within Limits
→ AI agents are modular, goal-directed systems that operate within clearly defined boundaries. They're built to:
* Use tools (APIs, browsers, databases)
* Execute specific, task-oriented workflows
* React to prompts or real-time inputs
* Plan short sequences and return actionable outputs
They're excellent for targeted automation, like: customer support bots, internal knowledge search, email triage, meeting scheduling, code suggestions.
But even the most advanced are limited by scope. They don't initiate. They don't collaborate. They execute what we ask!
2. Agentic AI: A System of Systems
→ Agentic AI is an architectural leap. It's not just one smarter agent: it's multiple specialized agents working together toward shared goals. These systems exhibit:
* Multi-agent collaboration
* Goal decomposition and role assignment
* Inter-agent communication via memory or messaging
* Persistent context across time and tasks
* Recursive planning and error recovery
* Distributed orchestration and adaptive feedback
Agentic AI systems don't just follow instructions. They coordinate. They adapt. They manage complexity.
Examples include: research teams powered by agents, smart home ecosystems optimizing energy and security, swarms of robots in logistics or agriculture managing real-time uncertainty.
The Core Difference?
AI Agents = autonomous tools for single-task execution
Agentic AI = orchestrated ecosystems for workflow-level intelligence
Now look at the picture:
On the left: a smart thermostat, which can be an AI Agent. It keeps your room at 21°C. Maybe it learns your schedule. But it's working alone.
On the right: Agentic AI. A full smart home ecosystem: weather-aware, energy-optimized, schedule-sensitive. Agents talk to each other. They share data. They make coordinated decisions to optimize your comfort, cost, and security in real time.
That's the shift: from pure task automation to goal-driven orchestration, from single-agent logic to collaborative intelligence. This is what's coming. This is Agentic AI. And if we confuse 'agent' with 'agentic', we risk underbuilding for what AI is truly capable of.
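To make that contrast concrete, here is a minimal Python sketch; the class names and the stubbed data are illustrative assumptions, not any specific product. The lone thermostat reacts to a single input, while the orchestrator decomposes the comfort-at-low-cost goal across several specialised agents and shares context between them.

```python
from dataclasses import dataclass

# --- AI Agent: one tool-using component with a narrow scope ----------------
@dataclass
class ThermostatAgent:
    target_c: float = 21.0

    def act(self, room_temp_c: float) -> str:
        """React to a single input and return one action."""
        if room_temp_c < self.target_c:
            return "heat_on"
        if room_temp_c > self.target_c:
            return "cool_on"
        return "idle"

# --- Agentic AI: an orchestrator coordinating specialised agents -----------
class WeatherAgent:
    def forecast(self) -> dict:
        return {"outside_temp_c": 14.0, "sunny": True}  # stubbed data

class EnergyAgent:
    def cheapest_window(self) -> str:
        return "13:00-15:00"  # stubbed tariff data

class SmartHomeOrchestrator:
    """Decomposes the goal 'comfort at minimal cost' across agents."""
    def __init__(self) -> None:
        self.thermostat = ThermostatAgent()
        self.weather = WeatherAgent()
        self.energy = EnergyAgent()

    def run(self, room_temp_c: float) -> dict:
        forecast = self.weather.forecast()
        window = self.energy.cheapest_window()
        # Adapt the sub-goal based on shared context from another agent.
        if forecast["sunny"]:
            self.thermostat.target_c -= 0.5  # expect passive solar gain
        return {
            "thermostat_action": self.thermostat.act(room_temp_c),
            "preheat_window": window,
        }

if __name__ == "__main__":
    print(ThermostatAgent().act(19.5))        # the lone agent
    print(SmartHomeOrchestrator().run(19.5))  # the orchestrated ecosystem
```

The point of the sketch is the shape, not the domain: the single agent maps one input to one action, while the orchestrator carries shared state and coordinates several narrow agents toward one goal.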
The Cornell University paper in the comments on this topic is excellent!
In fact, after guiding many organisations on this journey over the past few years, I've noticed two consistent drivers of AI adoption:
โข A culture that encourages experimentation
โข A strategic mandate from leadership that unlocks time, resources, and the infrastructure needed to make AI work at scale
Without both, even the most powerful tools are used at a fraction of their potential, leaving the promise of AI unrealised and considerable investments wasted.
→ If you have a conservative organisational culture, one that disincentivises risk-taking and change, and there's no clear mandate to use AI, you'll have idle potential. Try as you might, AI training will hardly translate into people using AI in their work. The knowledge might be there, but the impact isn't.
→ If you have an innovation culture, one where experimentation is encouraged, but where people are unsure if they're allowed to use AI, you'll have casual experiments - some people tinkering on their own, finding useful use cases and workarounds, but with no way to accumulate, build on, and spread this knowledge. That's where a lot of organisations find themselves in 2025: the majority of employees are using AI in some form, yet their efforts are siloed and scattered.
→ If you have both an innovation culture *and* an active mandate, you're pioneering innovation, and there are few companies at your level. That's an exciting place to be! That's also where a lot of organisations imagine they will get to as soon as they teach people to use AI, often without first doing the culture and mandate work.
→ If your organisation encourages the use of AI but your conservative culture keeps hitting the brakes, you'll likely end up with reluctant compliance. That's also where a considerable number of organisations are right now: driven by expectations of benefits from AI adoption but burdened by processes that are incompatible with grassroots innovation.
There is a difference between individual and organisational AI adoption. Organisational adoption is frustratingly complex: it requires coordination across departments and leaders, alignment with business priorities, and systems that enable change, not just enthusiasm.
Curiosity gets people started. Supportive systems turn momentum into scale.
Nodes #GenAI #AIAdoption #FutureOfWork #Talent
For the longest time we've had two main options to help people perform: upskilling or performance support. Just-in-case vs just-in-time. Push vs pull. With AI, we now have a third - enablement.
It's different from what we've had before:
Upskilling ("teach me") - commonly done through hands-on learning with feedback and reflection, such as scenario simulations, in-person role-plays, facilitated discussions, building and problem-solving. None of that has become less relevant, but AI has enabled scale through AI-enabled role-plays, coaching, and other avenues for personalised feedback.
Performance support ("help me") - support in the flow of work, previously often in the format of short how-to resources located in convenient places. AI has elevated that in at least two ways: through knowledge management, which helps retrieve the necessary, contextualised information in the workflow; and general & specialised copilots that enhance the speed and, arguably, the expertise of the employee.
Yet, enablement ("do it for me") is different: it takes the task off your plate entirely. We've seen hints of it with automations, but the text and analysis capabilities of genAI mean that increasingly 'skilled' tasks are now up for grabs.
Case in point: where written communication was once a skill to be learned, email and report writing are now increasingly being handed off to AI. No skill required (for better or worse): AI does it for you.
But here's a plot twist: a lot of that enablement happens outside of L&D tech. It may happen in sales or design software, or even your general-purpose enterprise AI.
All of which points to a bigger shift: roles, tasks, and ways of working are changing, and L&D must tune into how work is being reimagined to adapt alongside it.
Nodes #GenAI #Learning #Talent #FutureOfWork #AIAdoption
The Alan Turing Institute and the LEGO Group dropped the first child-centric AI study!
(A must-read, especially if you have children.)
While most AI debates and studies focus on models, chips, and jobs, this one zooms in on something far more personal: what happens when children grow up with generative AI?
They surveyed 1,700+ kids, parents, and teachers across the UK, and what they found is both powerful and concerning.
Here are 9 things that stood out to me from the report:
1. 1 in 4 kids (8-12 yrs) already use GenAI - most without safeguards
→ ChatGPT, Gemini, and even MyAI on Snapchat are now part of daily digital play.
2. AI is helping kids express themselves - especially those with learning needs
→ 78% of neurodiverse kids use ChatGPT to communicate ideas they struggle to express otherwise.
3. Creativity is shifting - but not replacing
→ Kids still prefer offline tools (arts, crafts, games), even when they enjoy AI-assisted play. Digital is not (yet) the default.
4. AI access is highly unequal
→ 52% of private school students use GenAI, compared to only 18% in public schools. The next digital divide is already here.
5. Children are worried about AI's environmental impact
→ Some kids refused to use GenAI after learning about water and energy costs. Let that sink in.
6. Parents are optimistic - but deeply worried
→ 76% support AI use, but 82% are scared of inappropriate content and misinformation. Only 41% fear cheating.
7. Teachers are using AI - and loving it
→ 85% say GenAI boosts their productivity, and 88% feel confident using it. They're ahead of the curve.
8. Critical thinking is under threat
→ 76% of parents and 72% of teachers fear kids are becoming too trusting of GenAI outputs.
9. Bias and identity representation are still a blindspot
→ Children of color felt less seen and less motivated to use tools that didn't reflect them. Representation matters.
The next generation isn't just using AI. They're being shaped by it. That's why we need a more focused, intentional approach: teaching them not just how to use these tools, but how to question them. To navigate the benefits, the risks, and the blindspots.
Want more breakdowns like this?
Subscribe to Human in the Loop, my new weekly deep dive on AI agents, real-world tools, and strategic insights: https://lnkd.in/dbf74Y9E
Understanding LLMs, RAG, AI Agents, and Agentic AI
I frequently see conversations where terms like LLMs, RAG, AI Agents, and Agentic AI are used interchangeably, even though they represent fundamentally different layers of capability.
This visual guide explains how these four layers relate, not as competing technologies, but as an evolving intelligence architecture.
Here's a deeper look:
1. LLM (Large Language Model)
This is the foundation. Models like GPT, Claude, and Gemini are trained on vast corpora of text to perform a wide array of tasks:
→ Text generation
→ Instruction following
→ Chain-of-thought reasoning
→ Few-shot/zero-shot learning
→ Embedding and token generation
However, LLMs are inherently limited to the knowledge encoded during training and struggle with grounding, real-time updates, or long-term memory.
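As a tiny illustration of the few-shot behaviour mentioned above, here is a hedged sketch of how a few-shot prompt is typically assembled; call_llm is a hypothetical placeholder for whatever client you use, not a real API.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: labelled examples followed by the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("It broke after a week.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was painless and fast.")
print(prompt)
# response = call_llm(prompt)  # hypothetical client call; any LLM API would slot in here
```

The same pattern carries instruction following: the model continues the pattern set by the examples instead of needing task-specific training.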
2. RAG (Retrieval-Augmented Generation)
RAG bridges the gap between static model knowledge and dynamic external information.
By integrating techniques such as:
→ Vector search
→ Embedding-based similarity scoring
→ Document chunking
→ Hybrid retrieval (dense + sparse)
→ Source attribution
→ Context injection
…RAG enhances the quality and factuality of responses. It enables models to 'recall' information they were never trained on, and grounds answers in external sources, which is critical for enterprise-grade applications.
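Here is a minimal, self-contained sketch of that retrieve-then-inject pattern. It uses a toy bag-of-words similarity as a stand-in for embedding-based vector search, and a hypothetical call_llm placeholder; a production RAG stack would swap in an embedding model, chunking, and a vector store.

```python
import math
from collections import Counter

DOCS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email Monday to Friday, 9am to 5pm.",
    "Premium accounts include priority support and a 60-day return window.",
]

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words term count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Context injection: retrieved passages are prepended to the user question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_grounded_prompt("How long do premium customers have to return an item?"))
# answer = call_llm(build_grounded_prompt(...))  # hypothetical LLM client call
```

Source attribution then falls out naturally: because the retrieved passages are known, the answer can cite exactly which documents it was grounded in.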
3. AI Agent
RAG is still a passive architecture: it retrieves and generates. AI Agents go a step further: they act.
Agents perform tasks, execute code, call APIs, manage state, and iterate via feedback loops. They introduce key capabilities such as:
→ Planning and task decomposition
→ Execution pipelines
→ Long- and short-term memory integration
→ File access and API interaction
→ Use of frameworks like ReAct, LangChain Agents, AutoGen, and CrewAI
This is where LLMs become active participants in workflows rather than just passive responders.
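A hedged sketch of the plan-act-observe loop that frameworks like ReAct formalise. The tool registry is plain Python, but fake_llm_decision is a scripted stand-in for the model call a real agent would make at every step.

```python
# Tools the agent is allowed to call (the "API interaction" layer).
def get_weather(city: str) -> str:
    return f"18°C and cloudy in {city}"  # stubbed tool result

def word_count(text: str) -> str:
    return str(len(text.split()))

TOOLS = {"get_weather": get_weather, "word_count": word_count}

def fake_llm_decision(goal: str, observations: list[str]) -> dict:
    """Stand-in for the model's next-step decision; a real agent prompts an LLM here."""
    if not observations:
        return {"action": "get_weather", "input": "Berlin"}
    return {"action": "finish", "input": f"Goal '{goal}' answered with: {observations[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):  # the execution pipeline / feedback loop
        decision = fake_llm_decision(goal, observations)
        if decision["action"] == "finish":
            return decision["input"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision["input"]))  # observe the tool result
    return "Stopped: step budget exhausted."

print(run_agent("What's the weather in Berlin?"))
```

Memory, file access, and retries are elaborations of this same loop: more tools in the registry and more state carried between iterations.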
4. Agentic AI
This is the most advanced layer, where we go beyond a single autonomous agent to multi-agent systems with role-specific behavior, memory sharing, and inter-agent communication.
Core concepts include:
→ Multi-agent collaboration and task delegation
→ Modular role assignment and hierarchy
→ Goal-directed planning and lifecycle management
→ Protocols like MCP (Anthropic's Model Context Protocol) and A2A (Google's Agent-to-Agent)
→ Long-term memory synchronization and feedback-based evolution
Agentic AI is what enables truly autonomous, adaptive, and collaborative intelligence across distributed systems.
Whether you're building enterprise copilots, AI-powered ETL systems, or autonomous task orchestration tools, knowing what each layer offers, and where it falls short, will determine whether your AI system scales or breaks.
If you found this helpful, share it with your team or network.
If there's something important you think I missed, feel free to comment or message me; I'd be happy to include it in the next iteration.
BREAKING: Claude for Education launches. Free learning is now much faster with AI:
1. Set clear learning goals
↳ Knowing what you want to learn makes it easier.
↳ Claude helps you define your path.
2. Provide context for your knowledge
↳ Understanding the bigger picture is key.
↳ Claude connects new ideas to what you already know.
3. Request detailed explanations
↳ Sometimes, you need more than a quick answer.
↳ Claude can dive deep into complex topics.
4. Get real-world examples
↳ Learning is better with practical applications.
↳ Claude shows how concepts work in the real world.
5. Practice writing and receive feedback
↳ Writing helps solidify your knowledge.
↳ Claude gives instant feedback to improve your skills.
6. Role-play for languages or coding
↳ Learning by doing is effective.
↳ Claude can simulate conversations or coding scenarios.
7. Fact-check surprising claims
↳ Misinformation is everywhere.
↳ Claude helps you verify facts and claims.
8. Take breaks and reflect on learning
↳ Reflection is vital for understanding.
↳ Claude reminds you to pause and think.
9. Keep a learning journal
↳ Tracking your progress is important.
↳ Claude can help you log your journey.
10. Iterate and refine understanding
↳ Learning is a process.
↳ Claude encourages you to improve your knowledge.
New research shows that your learners aren't using AI to cheat - they're using it to redesign your courses...
Despite our obsession with AI's impact on "academic integrity," two recent analyses show that rather than asking AI for answers, learners are much more likely to use AI to redesign the learning experience in an attempt to learn more.
Common strategies include asking AI to apply the protégé effect, using AI to apply the Pareto principle, and enhancing levels of emotional metacognition within a learning experience, in the process redesigning the experience sometimes beyond recognition.
The uncomfortable truth? Learners are effectively running a real-time audit of our design decisions, processes & practices, and as instructional designers we don't come out too well.
In this week's blog post, I explore what learner + AI behaviour reveals about our profession and how we might turn this into an opportunity for innovation in instructional design practices and principles.
Check out the full post using the link in comments.
Happy innovating!
Phil
Required Sunday reading: Mary Meeker has dropped one of her legendary reports... After the annual "Internet Trends", this is a 340-page tome from her investment firm Bond Capital, entirely on AI.
Superb food for thought on, among other things:
1. User growth and adoption
• ChatGPT reached 800 million weekly users in just 17 months
• Adoption outside North America stands at 90 percent, after only 3 years
• For comparison: the internet took 23 years to get there
• AI applications scale globally almost simultaneously
2. Investment and infrastructure
• Big Tech (Apple, Microsoft, Google, Amazon, Meta, Nvidia) invests over 212 billion dollars in CapEx per year
• AI is becoming the new infrastructure, comparable to electricity or the internet
• Data centers are becoming productive "AI factories"
3. Developer ecosystems are exploding
• Google Gemini: 7 million active developers, +500 percent in 12 months
• NVIDIA ecosystem: 6 million developers, 6x growth in seven years
• Open source increasingly plays a key role, including in China
4. Technological progress is accelerating exponentially
• 260 percent growth per year in training data volumes
• 360 percent growth per year in compute for model training
• Better algorithms deliver 200 percent efficiency gains per year
• Advances in supercomputers enable +150 percent performance growth annually
5. Monetization is real, but expensive
• OpenAI shows strong user growth but continues to post billions in losses
• Compute costs are rising, inference costs per token are falling
• Monetization at scale remains challenging and competitive
6. Work and society are visibly changing
• AI-related IT jobs in the US: +448 percent since 2018
• Non-AI IT jobs: -9 percent
• The first autonomous taxis are taking market share in cities like San Francisco
• AI scribes in medicine are massively reducing administrative overhead
7. Knowledge and communication are entering a new era
• After the printing press and the internet comes the era of generative knowledge distribution
• Generative AI is changing how we create, distribute, and use knowledge
• Applications like ElevenLabs or Spotify translate voices in real time, globally scalable
8. Geopolitics is becoming AI strategy
• The US and China are investing aggressively in sovereign AI models
• Whoever dominates AI infrastructure redefines economic and political power
• Leading CTOs openly speak of a new "space race"
9. The opportunities and risks are enormous
• AI can boost medical research, education, and creativity
• At the same time, there is the threat of loss of control, misuse, job displacement, and ethical dilemmas
Opinions? Evangelos Papathanassiou Christian Herold Thorsten Muehl Christoph Deutschmann Constance Stein Rebecca Schalber Sandy Brueckner Dirk Hofmann Henning Tomforde Dr. Paul Elvers Katharina Neubert Laura Seiffe Ekaterina Schneider
In 2025, the AI landscape has evolved far beyond just large language models. Knowing which model to use for your specific use case, and how, is becoming a strategic advantage.
Let's break down the 8 most important model types and what they're actually built to do:
1. LLM - Large Language Model
→ Your ChatGPT-style model.
Handles text, predicts the next token, and powers 90% of GenAI hype.
Use case: content, code, convos.
2. LCM - Latent Consistency Model
→ Lightweight, diffusion-style models.
Fast, quantized, and efficient, perfect for real-time or edge deployment.
Use case: image generation, optimized inference.
3. LAM - Language Action Model
→ Where LLM meets planning.
Adds memory, task breakdown, and intent recognition.
Use case: AI agents, tool use, step-by-step execution.
4. MoE - Mixture of Experts
→ One model, many minds.
Routes input to the right "expert" model slice: dynamic, scalable, efficient.
Use case: high-performance model serving at low compute cost.
5. VLM - Vision Language Model
→ Multimodal beast.
Combines image + text understanding via shared embeddings.
Use case: Gemini, GPT-4o, search, robotics, assistive tech.
6. SLM - Small Language Model
→ Tiny but mighty.
Designed for edge use, fast inference, low latency, efficient memory.
Use case: on-device AI, chatbots, privacy-first GenAI.
7. MLM - Masked Language Model
→ The OG foundation model.
Predicts masked tokens using bidirectional context (a short sketch follows below the list).
Use case: search, classification, embeddings, pretraining.
8. SAM - Segment Anything Model
→ Vision model for pixel-level understanding.
Highlights, segments, and understands *everything* in an image.
Use case: medical imaging, AR, robotics, visual agents.
Understanding these distinctions is essential for selecting the right model architecture for specific applications, enabling more effective, scalable, and contextually appropriate AI interactions.
While these are some of the most prominent specialized AI models, there are many more emerging across language, vision, speech, and robotics, each optimized for specific tasks and domains.
LLM, VLM, MoE, SLM, LCM → GenAI
LAM, MLM, SAM → Not classic GenAI, but critical building blocks for AI agents, reasoning, and multimodal systems
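As a concrete illustration of the masked-language-model idea from item 7, here is a short sketch using the Hugging Face transformers fill-mask pipeline. It assumes the transformers library is installed and the bert-base-uncased checkpoint can be downloaded; any BERT-style model would behave similarly.

```python
from transformers import pipeline

# A BERT-style model predicts the hidden token from context on both sides.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for prediction in unmasker("Segmentation models are widely used in [MASK] imaging.")[:3]:
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Those same bidirectional representations are what make MLMs useful for search, classification, and embeddings rather than free-form generation.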
I explore these developments, and what they mean for real-world use cases, in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
Kudos for the graphic goes to Generative AI!
The 3 Levels of AI Integration - AI Adoption, AI Adaptation & AI Transformation
Adoption, Adaptation, Transformation: how AI is changing our world of work! In this video we dive into the world of AI integration and examine...
A reach knockout for many websites. First, search queries migrated from Google to ChatGPT; now Google answers them directly in its AI Overviews.
Studies already show steep traffic declines.
What can editors and publishers do?
That's why I stepped into the podcast ring with Matthäus Michalik:
We recorded 2 episodes: GEO vs. SEO & How do we get placed in the AI Overviews?
As a teaser for you:
4 instant tips for GEO (Generative Engine Optimization)
1. Demonstrate authority & trust
• Explicitly cite sources, quotations, and expert references.
• Result: up to 40% higher likelihood of being cited in AI answers.
2. Let the numbers speak
• Include statistics, study data, and your own benchmarks.
• AI models weight quantitative information more heavily: a +30% relevance boost.
3. Clear structure, simple language
• Short paragraphs, bullet points, FAQs, descriptive subheadings.
• Makes parsing easier for LLMs and increases the chance of being quoted directly.
4. Deliberate use of domain terminology
• Deliberately weave in relevant terminology and industry jargon.
• Signals expertise and improves matching for specific user queries.
Short formula: authority + data + clarity + terminology = visibility in chat answers.
Visibility in AI Overviews: what you need to keep in mind
• Prerequisite: your site must be in the Google index and already have a certain level of trust. Then:
• High-quality, fact-based content: precise, well-researched, up to date.
• Clear structure: H-headings, lists, tables, which make parsing easier.
• Structured data (Schema.org): tells the AI what each element means.
• UX & performance: fast load times, clean mobile design.
• Maintain E-E-A-T: continuously demonstrate expertise, experience, authority, and trust (author profiles, sources, backlinks).
8 practical tips for the post-SEO era
✔ Quality over quantity: fewer, deeper pieces with clear expertise.
✔ Structure first: H-tags, bullet points, FAQ blocks, schema.
✔ Optimize the user experience: speed, navigation, mobile UX.
✔ Add value beyond the AI: your own data, cases, opinions.
✔ Diversify traffic sources: social, email, communities, partnerships.
✔ Monitoring & adjustment: watch which pages land in AI Overviews, and iterate.
✔ Think multimedia: videos, podcasts, and infographics complement text.
✔ Continuously strengthen E-E-A-T: expert authors, references, reviews, backlinks.
Short formula: quality + structure + added value + trust + channel mix = long-term visibility, including in AI search.
How are you approaching the battle for visibility and traffic? Let's discuss.
If you could start again from zero, how would you build a data and AI organization?
That's exactly what I wanted to know from Claudia Pohlink, who has had an impressive career in the data and AI world at Telekom, Deutsche Bahn, and FIEGE.
So what does the anti-hype blueprint look like?
1. Define and structure master data
Start by defining your core domains and master data. Determine the leading system for each data domain before selecting tools. This creates a stable foundation for all AI activities.
2. Write the first success story
Identify an initial use case, for example with the controlling department, where data affinity already exists. Show quick wins to earn management support.
3. Implement the three-house model
• House of Data: foundations, governance, architecture
• House of AI: use cases, data scientists, engineers
• House of 3C: change, communication, community
These three areas should be built up in equal measure. None of them can deliver sustainable data and AI implementation without the others. The leads should initially be built up internally; operational resources can be bought in externally.
4. Find the balance between centralized and decentralized
Establish central standards and coordination, but at the same time empower decentralized teams through multiplier ideas such as AI awards, training, and hackathons. According to Claudia, this balance is one of the biggest challenges in execution.
5. Plan pragmatically instead of theorizing
Create 6-12 month plans instead of long-term strategies. Document experiences systematically, including failures, and adjust your plans regularly.
I know how many mid-sized companies face the big task of building data and AI competencies and structures in their organization.
Claudia's experience is a real treasure chest.
Entirely without buzzwords, hype, or self-promotion.
Claudia, a thousand thanks for your openness and for letting us share in your experience!
What do you think of the blueprint?
With its AI Mode and the Mariner agent, Google is pulling a platform layer over the open web. Google is transforming itself from a classic search engine into a central marketplace, assistant, and payment provider. In the future, users will be able to find, compare, buy, and pay for products directly in Google Search, without leaving the platform.
This development has far-reaching consequences for the entire internet ecosystem. The impact hits not only classic online retailers but also marketplace giants like Amazon, publishers, translation services like DeepL, reservation providers like OpenTable, booking sites like Ticketmaster, and language-learning providers like Duolingo.
Anyone who wants to remain visible and relevant must adapt to the new rules of the game, be present in AI Overviews and shopping graphs, and optimize their content for AI systems. OpenAI is building something similar, and Amazon is moving in the same direction. The platform war has now well and truly arrived in the AI era.
Read more at F.A.Z. PRO Digitalwirtschaft (€): https://lnkd.in/e-r8k7up
Frankfurter Allgemeine Zeitung
In a new paper, British philosopher Andy Clark (author of the 2003 book Natural Born Cyborgs, see comment below) offers a rebuttal to the pervasive anxiety surrounding new technologies, particularly generative AI, by reframing the nature of human cognition. He begins by acknowledging familiar concerns: that GPS erodes our spatial memory, search engines inflate our sense of knowledge, and tools like ChatGPT might diminish creativity or encourage intellectual laziness. These fears, Clark observes, mirror ancient worries, like Plato's warning that writing would weaken memory, and stem from a deeply ingrained but flawed assumption: the idea that the mind is confined to the biological brain.
Clark challenges this perspective with his extended mind thesis, arguing that humans have always been cognitive hybrids, seamlessly integrating external tools into our thinking processes. From the gestures we use to offload mental effort to the scribbled notes that help us untangle complex problems, our cognition has never been limited to what happens inside our skulls. This perspective transforms the debate about AI from a zero-sum game, where technology is seen as replacing human abilities, into a discussion about how we distribute cognitive labour across a network of biological and technological resources.
Recent advances in neuroscience lend weight to this view. Theories like predictive processing suggest that the brain is fundamentally geared toward minimising uncertainty by engaging with the world around it. Whether probing a river's depth with a stick or querying ChatGPT to clarify an idea, the brain doesn't distinguish between internal and external problem-solving; it simply seeks the most efficient path to resolution. This fluid interplay between mind and tool has shaped human history, from the invention of stone tools to the design of modern cities, each innovation redistributing cognitive tasks and expanding what we can achieve.
Generative AI, in Clark's view, is the latest chapter in this story. While critics warn that it might stifle originality or turn us into passive curators of machine-generated content, evidence suggests a more nuanced reality. The key, Clark argues, lies in how we integrate these technologies into our cognitive ecosystems.
https://lnkd.in/gUmxE57w
Microsoft Build 2025 Keynote: Everything Revealed, in 14 Minutes
Watch CEO Satya Nadella unveil all the biggest product moves, including Copilot and Azure updates, developer tools, and more, from Seattle. 0:00 Intro 0:14 Bui...
At I/O 2025, Google showed us what AI-first REALLY means. Here's what Google announced:
The company's flagship developer event Google I/O 2025 was held last night in Mountain View, California.
TL;DR: Google is turning Gemini into the AI operating system for everything, with agents now embedded across Search, Chrome, Workspace, Android, and more.
If you don't have time for the full event, here's a curated supercut of the highlights that really matter.
Key moments from Google I/O 2025:
0:00 Intro - AI-native from the ground up
0:11 Gemini plays a Pokemon game - memory, reasoning, and code
0:30 Google Beam - Real-time 3D video chat with AI
1:08 Google Meet - Speech-to-speech translation, live
1:27 Project Mariner - AI agents that book, plan, filter, decide
2:07 Personal Context - Gemini gets memory and task awareness
2:40 Gemini 2.5 Pro + Flash - New SOTA models, LMArena leader
4:57 Project Astra - Multimodal, fast-response agent that sees and hears
5:32 AI Mode - Overlay for restaurants, bookings, prices, events
7:10 Shopping - Track, compare, and auto-buy with Google Pay
8:34 Gemini Live - Screen sharing + live AI guidance
8:59 Deep Research Agent - Upload files, get insights
9:12 Canvas - Live, collaborative AI whiteboard
9:31 Gemini in Chrome - AI understands and acts on any webpage
9:51 Imagen 4 - Next-gen image generation
10:23 Veo 3 - Ultra-realistic video model
11:01 Lyria 2 - AI-powered music composition
11:56 Flow - Multimodal, promptable AI video creation
12:39 Android XR - AI-first spatial computing
12:57 Samsung Moohan - Google's XR headset revealed
13:16 Live glasses demo - Gemini + XR = real-time AI overlay
Super insightful and forward-looking: Google's AI strategy just went full stack. Even if some of these projects don't make it past the prototype stage, the direction is obvious: AI is being integrated into everything. LLMs, Gemini in this case, are rapidly becoming the new operating system, and everything will be powered by AI agents across all products.
Full keynote: https://lnkd.in/dPFFtyZ9
Supercut: https://lnkd.in/d-eBNGjw
Enjoy watching!
Recent research showed that every 7 months, AI doubles the length (in human time taken) of the tasks it can solve. AI researcher Toby Ord has built on the original study to show that AI success probability declines exponentially with task length, defining model capabilities by a "half-life."
One of the most interesting things about the original research is that it provides a clear metric for measuring AI performance improvement that is not tied to benchmarks that keep being superseded and needing replacements.
We can now rank AI models and agents by their half-life: the length of human task at which they achieve a 50% success rate.
Of course, we are usually more interested in models that can achieve 99+% success rates, depending on the task, but the relative consistency of the half-life decay means the T50 threshold predicts whatever success rate we aim for, both today and at future dates, if the original trend holds.
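A small numeric sketch of that half-life framing. The exponential form is an assumption taken from the description above (success halving each time task length grows by one half-life), and the 60-minute T50 is an invented example value, not a figure from the paper.

```python
import math

def success_probability(task_minutes: float, t50_minutes: float) -> float:
    """Assumed model: success halves for every additional half-life of task length."""
    return 0.5 ** (task_minutes / t50_minutes)

def horizon_for(target_success: float, t50_minutes: float) -> float:
    """Longest task length at which the model still hits the target success rate."""
    return t50_minutes * math.log2(1.0 / target_success)

T50 = 60.0  # hypothetical model: 50% success on one-hour human tasks
print(f"success on a 2-hour task: {success_probability(120, T50):.0%}")  # 25%
print(f"99%-success horizon: {horizon_for(0.99, T50):.1f} minutes")      # under a minute
```

The striking implication is the same one made above: a model whose 50% horizon is an hour may only clear a 99% bar on tasks of roughly a minute, which is why the T50 metric and the target success rate have to be read together.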
Generally the decay is due to cumulative errors or going off course. But the decay is not always consistent, as there can be subtasks of uneven difficulty, or agents can recover from early mistakes.
Interestingly, humans don't follow pure exponential decay curves. Our success rate falls off more slowly over very long tasks, suggesting we have broader context, allowing us to recover from early mistakes.
The research was applied to tasks in research or software engineering. The dynamics of this performance evolution may or may not apply to other domains.
Certainly, this reframing of assessing the development of AI capabilities and its comparison to human work is a very useful advance to the benchmarking approach.
HR + IT: The Future of Work? That question has been on my mind since I first read about Moderna merging its HR and Tech departments. They are redefining what it means to be a future-ready company.
Here's what I take away:
HR is no longer just about people.
IT is no longer just about systems.
The real value lies in how people and systems interact: seamlessly, intelligently, adaptively.
Let's be honest, most organizations still operate in silos:
- HR builds talent and culture
- IT builds systems and infrastructure
But the future of work is all about integration. What if you make that happen?
Think about it: Can you redesign work itself?
Not roles. Not org charts. But the actual FLOW of work.
Because that's what Moderna is doing. They are reimagining how humans and machines co-create value. IBM is doing the same: they use HR AI agents that handle questions, route issues, and manage HR processes.
This isn't about cutting costs. It's about building a business that adapts faster to the next disruption. They are building resilience.
I recognize that HR and IT both have unique complexities, and in many companies they are simply too far apart, or too large, to merge any time soon.
Still, it got me thinking. As an HR leader:
-> How comfortable are you with data, automation, and AI?
-> Could you confidently lead both people strategy and digital infrastructure?
-> What would need to change for that answer to be yes?
This isn't a tech conversation.
It's an organization and leadership revolution.
The next era of HR won't be like today's HR at all. It will be integrated, tech-savvy, and central to how business gets done.
Time to level up. Are you ready?
#futureofwork #hrtech #ai
Picture and story credits: Isabelle Bousquette
Research on over 3,500 workers points to two outcomes from use of GenAI: immediate performance boosts, and a decrease in motivation and an increase in boredom when switching to non-augmented tasks.
It is definitely interesting research, but I am very cautious about the conclusions reached by the authors, partly since they are to a degree contradictory, and also not necessarily generalizable.
The authors implicitly criticize AI for removing the "most cognitively demanding parts" of work, implying that this reduces fulfillment. But the outputs and productivity are clearly improved. Are they suggesting workers create inferior output for the sake of engagement?
It is worth noting that other recent research points to improved emotion and engagement with genAI collaboration. The emotional impact of genAI collaboration will vary substantially across use cases, especially with the nature of the task, and certainly with the cultural context. It appears the use case here was performance reviews, which is not representative of many other types of cognitive work.
The authors also say that AI-assisted tasks reduce users' sense of control, thus lowering motivation. But they say this sense of control is restored during subsequent solo tasks, even though those are when boredom and disengagement rise.
Having said that, for some tasks and work design the issues they raise could be real and substantial. These are the sound remedies they suggest:
→ Blend AI and Human Contributions:
Use gen AI as a foundation for tasks while encouraging humans to personalize, expand, and refine outputs to retain creativity and ownership.
→ Design Engaging Solo Tasks:
Follow AI-supported work with autonomous, creative tasks to help employees stay motivated and exercise their own skills.
→ Make AI Collaboration Transparent:
Clearly communicate AI's supporting role to preserve employees' sense of control and fulfillment in their contributions.
→ Rotate Between Tasks:
Alternate between independent and AI-assisted tasks to maintain engagement and productivity throughout the workday.
→ Train Employees to Use AI Mindfully:
Provide training that helps employees critically and strategically integrate AI, strengthening their autonomy and judgment.
Best AI Tools for Deep Research (Ranked by a PhD, Not Hype)
Today, I'm diving into the world of deep research tools to find out which platforms are truly the most helpful for academic work. Sign up for my FREE new...
To stop playing catch-up and stay ahead of AI, we need to form a point of view on the future of work. A POV on FOW, if you will.
There is a lot of talk about how L&D needs to be proactive, not reactive. But how do we do that when technology is moving so fast?
It starts with having a point of view on where the world of work is headed, and then building a bridge to that future. Because if we only make incremental changes from where we are now, we'll likely be playing catch-up for a long time, and risk preparing people for the work of today, not tomorrow.
Here are some of the forces I think about a lot these days:
• AI seems to be denting the supply of entry level jobs. What does that mean for the talent pipeline later down the line? And how should we onboard the graduates that *do* get employed so they can add value on top of AI?
• AI gets lower performers closer to higher performers (HBS & BCG study), and individuals working with AI match the performance of *teams* without AI (HBS & P&G study). How do we evaluate, recognise and enhance expertise in such a world?
• Vibe coding/marketing/learning/something else, single founder unicorns, service-as-a-software (not software-as-a-service!) and the zero latency economy are just some of the predictions that would affect both the nature and pace of work. What support would our people and organisations need to adapt?
L&D isn't short on AI tools. What we need is a vision: to imagine how AI will reshape performance, learning, and the world of work at large. And, ultimately, what L&D needs to become to have a role in it.
Nodes #AI #HR #Learning #Talent #FutureOfWork
In their "thousand flowers" strategy, J&J seeded 900+ GenAI use cases. Using clear metrics, they found that 10-15% of these drove 80% of the value, and pivoted to focusing on fewer scalable, high-impact use cases.
In my work with boards and exec teams one of the pointed questions is always the degree of focus in AI initiatives. Johnson & Johnson's divergent-convergent strategy is highly instructive.
Some commentators have suggested that this means the use case proliferation was a mistake. J&J's CIO doesn't see it like that.
"You had to take an iterative approach to say, โWhere are these technologies useful and where are they not?โ... We had the right plan three years ago, but we matured our plan based on three years of understanding,โ
Leaders cannot know in advance where the value will emerge. The challenge is to select the right scope of experimenation before selecting focus use cases.
Another shift was from centralized AI by a board governance to function-specific ownership such as commercial, R&D, and supply chain, enabling better prioritization and faster iteration.
Again, these models suit different phases of the AI adoption journey. Most organizations are far earlier than J&J, which has strong maturity.
On metrics:
"The company is tracking progress in three buckets: first, the ability to successfully deploy and implement use cases; second, how widely they are adopted; and third, the extent to which they deliver on business outcomes."
I strongly suspect that they are not using a "win rate" on their use case success. There are similarities to VC portfolios, where a few big wins make all the investments worthwhile.
3,000 AI assistants integrated across all teams.
That is Moderna's AI journey:
"It's hard to convey, within the hype, how much AI is changing things and how much Moderna is using it across the board."
This quote from Wade Davis, Moderna's Head of Digital for Business, captures nicely how hard it is to describe the all-encompassing change that AI brings.
It is not just 2-3 use cases in a couple of areas. It is much more about a change in how people think and work.
While many companies are still hesitating, Moderna has already taken concrete steps to implement AI strategically:
1. Merging HR and IT under one leadership
2. Systematic analysis of all work processes
3. A clear decision: what do humans do, and what do machines do?
4. Development of 3,000 specialized AI assistants
5. Integration of these assistants into complex workflows
The tactical approach behind it is remarkable:
↳ Not individual AI projects, but a comprehensive transformation
↳ No isolated tools, but interconnected systems
↳ No focus on headcount reduction, but on redesigning work
AI integration is not a one-off initiative, but an ongoing process of organizational development.
Moderna shows that success does not depend on individual tools, but on strategically redesigning the work itself.
That is exactly the path to take.
With more than 260,000 registrations, Google actually broke the Guinness World Records title for largest attendance at a virtual AI conference in one week.
(I didn't even know that was a thing!) Not able to attend? Everything that was covered, from theory to application, is now available for free...
→ Day 1: Foundational Models & Prompt Engineering
https://lnkd.in/d-_w3gXj
→ Day 2: Embeddings & Vector Stores / Databases
https://lnkd.in/dkmfDUcp
→ Day 3: Generative AI Agents
https://lnkd.in/dd3Zd2-F
→ Day 4: Domain-Specific LLMs
https://lnkd.in/d6Z39yqt
→ Day 5: MLOps for Generative AI
https://lnkd.in/dcXCTPVF
And, be sure to check out the winners of the course's capstone project: building tools from Generative AI (classroom assistants, schedulers, mock interviewers and more.) https://lnkd.in/dPsXnrct
Interested in putting all of those newly-developed AI skills to use? Here are some of the latest job openings here at Google: http://google.com/careers. Hope to see you around!
#google #lifeatgoogle #training #ai #education