AI-GenAI

1579 bookmarks
Samsung AI researcher's new, open reasoning model TRM outperforms models 10,000X larger — on specific problems
New model out-reasons rivals 10,000x its size: The Tiny Recursion Model (TRM), developed by Samsung researcher Alexia Jolicoeur-Martineau, is a neural network with just 7M parameters that reportedly outperforms much larger models like GPT-4 on some complex reasoning tasks. Because it works by repeatedly refining its own predictions rather than relying on raw computational power, the result suggests that careful architecture design, not sheer scale, could drive the next wave of AI innovation.
·venturebeat.com·
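As a rough, hypothetical sketch of the "refine, don't scale" idea the TRM excerpt describes: a single tiny block that repeatedly updates a latent scratchpad and re-predicts the answer at each step. The class name, dimensions, and step count below are illustrative assumptions, not the paper's published architecture.

```python
# Hypothetical illustration of iterative self-refinement (not TRM's actual code).
# A small shared block updates a latent "scratchpad" z and re-predicts y each step.
import torch
import torch.nn as nn

class TinyRefiner(nn.Module):
    def __init__(self, x_dim=64, z_dim=64, y_dim=10, steps=6):
        super().__init__()
        self.steps, self.z_dim, self.y_dim = steps, z_dim, y_dim
        self.update = nn.Sequential(              # one tiny block, reused at every step
            nn.Linear(x_dim + z_dim + y_dim, z_dim),
            nn.ReLU(),
            nn.Linear(z_dim, z_dim),
        )
        self.readout = nn.Linear(z_dim, y_dim)    # maps the latent state to an answer

    def forward(self, x):
        z = x.new_zeros(x.size(0), self.z_dim)    # latent scratchpad
        y = x.new_zeros(x.size(0), self.y_dim)    # current answer guess
        for _ in range(self.steps):               # refine instead of adding parameters
            z = self.update(torch.cat([x, z, y], dim=-1))
            y = self.readout(z)
        return y

model = TinyRefiner()
print(model(torch.randn(8, 64)).shape)            # torch.Size([8, 10])
```

The point of the sketch is that extra reasoning comes from more refinement passes at inference time rather than from more parameters, which is the trade-off the excerpt highlights.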
Repo Bench – Ranking AI models
Ranking models for large context reasoning, file editing precision, and instruction adherence for coding tasks.
·repoprompt.com·
State of AI Report 2025
The State of AI Report analyses the most interesting developments in AI. Read and download here.
·stateof.ai·
Police issue warning over AI home invasion prank
A trend going viral on TikTok has people using AI to prank their loved ones into believing an intruder is in their home. Police are warning users that the prank could result in criminal charges. NBC News' Ellison Barber has more on the viral trend.
·nbcnews.com·
Classrooms embraced AI - training didn’t keep up, CDT warns

Prior research and experts warn that spending too much time with AI bots can have a negative effect on in-real-life (IRL) social skills - an outcome which may be more severe for young, developing minds. Teachers who responded to CDT's research appear to agree, as 71 percent said that they're worried AI weakens key academic skills such as writing and critical thinking. … Only 11 percent of teachers said that their training covered how to respond if they suspect a student's use of AI is harming their well-being, for example, hurting self-esteem or encouraging risky behavior.

·theregister.com·
The New Talk: The Need To Discuss AI With Kids

“[I]t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterized ChatGPT when he reviewed his son’s conversations with the AI tool. Adam tragically died by suicide. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.

·thefulcrum.us·
AI Use at Work Has Nearly Doubled in Two Years
A recent Gallup poll of nearly 20,000 workers found that the percentage of U.S. employees who say they have used AI in their jobs at least a few times a year has nearly doubled since Gallup's 2023 poll, from 21% to 40%. The percentage of U.S. employees frequently using AI in their jobs has also nearly doubled, to 19%.
·gallup.com·
AI Reshapes the American Workplace—But Where Are the Jobs?
If 2023 was about increasing adoption of AI coming out of the pandemic, experts are saying 2025-26 will be when companies implement deeper changes in the workplace based on ever more pervasive AI.
·thefulcrum.us·
Trends and factors affecting generational financial trauma
A recent study found that 35% of Gen X and 33% of millennials feel worse off than their parents, far more than the 19% of baby boomers and 17% of Gen Z who say the same. Job losses due to AI join crushing student loan debt, stagnant wages, and soaring housing costs as key factors leading many Gen Xers and millennials to doubt whether they will ever achieve the same financial stability as their parents.
·creditonebank.com·
From Burnout to Balance: AI-Enhanced Work Models for the Future
The study polled 2,500 professionals and found that 77% of workers say generative AI has actually decreased their productivity and increased their workloads. According to the report, many workers feel “overwhelmed by the added workload and complexity it brings.”
·upwork.com·
The Labor Market for Recent College Graduates
Studies confirm what others, such as the New York Federal Reserve, have found, namely that entry-level positions and work opportunities for recent college graduates have “deteriorated noticeably.”
·newyorkfed.org·
Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence — Stanford Digital Economy Lab
A new study from Stanford University’s Erik Brynjolfsson found that young workers aged 22–25 in “highly AI-exposed” jobs, such as software developers, accountants, and customer service agents, are being replaced by AI at a rapid rate.
·digitaleconomy.stanford.edu·
How AI Impacts the Labor Market - Will Your Job Be Affected?
Marc Benioff, CEO of Salesforce, one of the largest software companies in the world for business productivity applications, says that AI is already doing 30% to 50% of all the work at his company, and he believes that growth will continue.
·youtube.com·
AI 2027
From experts who expect quick implementation over the next decade, with an impact “exceeding that of the Industrial Revolution.”
·ai-2027.com·
These Jobs Will Fall First As AI Takes Over The Workplace
Goldman Sachs, one of the world’s largest investment banks, has projected that 300 million jobs around the world could be lost to AI automation over the next 10 years, affecting 25% of the global labor market.
·forbes.com·
Gemini at Work 2025
Today, at our Google Cloud event, we’re announcing Gemini Enterprise, the new front door for AI in the workplace.
·blog.google·
The wicked problem of AI and assessment

Our findings demonstrate that the GenAI-assessment challenge exhibits all ten characteristics of wicked problems. For instance, it resists definitive formulation, offers only better or worse rather than correct solutions, cannot be tested without consequence, and places significant responsibility on decision-makers. In the light of this redefinition of the AI and Assessment problem, we argue that educators require certain institutional permissions – including permission to compromise, diverge, and iterate – to appropriately navigate the assessment challenges they face.

Permission to Compromise: It allows educators to state plainly that this assessment prioritizes X at the expense of Y, and here is why. It transforms institutional culture from one that punishes imperfection to one that learns from it. When we stop seeking perfect solutions, we can start having honest conversations about which trade-offs serve our students best, which failures taught us most, and how to be thoughtfully imperfect rather than accidentally inadequate.

Permission to Diverge: At its core, ‘permission to diverge’ means accepting that successful practices in one educational context need not – and often should not – be replicated elsewhere. It is the recognition that divergent approaches to common challenges can reflect contextual wisdom rather than inconsistency or failure. By granting ourselves permission to diverge, we acknowledge that different contexts might require quite different responses. This recognises that quality manifests differently across years, disciplines, cohort sizes, and professional destinations. The business educator who integrates AI because employers demand it and the nursing educator who restricts it to ensure clinical competence are both appropriate. Divergence can reflect wisdom that we can easily mistake for confusion. This permission transforms institutional expectations from uniformity to fitness for purpose. Divergence becomes a sign of thoughtful response rather than institutional failure.

Permission to Iterate: When AI capabilities transform monthly, when student behaviours shift each semester, and when professional requirements evolve constantly, the result can be that educators design assessments for yesterday’s technology, implemented with today’s students, preparing for tomorrow’s unknowns. Permission to iterate recognizes that wicked problems evolve continuously, making fixed solutions obsolete.

This permission transforms assessment from a product to be delivered to a practice to be refined.

The path forward requires abandoning the search for silver bullets in favour of developing adaptive capacity. This means creating institutional structures that support educator decision-making rather than mandating uniform responses, recognizing divergent approaches as evidence of contextual wisdom rather than institutional inconsistency, and treating assessment iteration as professional development rather than design failure.

·tandfonline.com·