Found 4 bookmarks
Google’s A.I. Search Errors Cause a Furor Online
This February, the company released Bard’s successor, Gemini, a chatbot that could generate images and act as a voice-operated digital assistant. Users quickly realized that the system refused to generate images of white people in most instances and drew inaccurate depictions of historical figures.

With each mishap, tech industry insiders have criticized the company for dropping the ball. But in interviews, financial analysts said Google needed to move quickly to keep up with its rivals, even if it meant growing pains.

Google “doesn’t have a choice right now,” Thomas Monteiro, a Google analyst at Investing.com, said in an interview. “Companies need to move really fast, even if that includes skipping a few steps along the way. The user experience will just have to catch up.”
·nytimes.com·
How Perplexity builds product
An inside look at how Perplexity builds product, which to me feels like what the future of product development will look like for many companies:

- AI-first: They’ve been asking AI questions about every step of the company-building process, including “How do I launch a product?” Employees are encouraged to ask AI before bothering colleagues.
- Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
- Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
- Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people’s work.
- A prediction for the future: Johnny said, “If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time.”
Typical projects we work on only have one or two people on it. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He’s a brand designer, but he does audio engineering and he’s doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don’t think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
A big problem with AI is how you prioritize between more productivity-based use cases and the more engaging chatbot-type use cases.
We look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important thing to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
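The coordination-cost point has a simple combinatorial intuition. A minimal sketch (my illustration, not from Komoroske’s deck), assuming every pair of collaborators needs its own communication channel:

```python
def pairwise_channels(n: int) -> int:
    """Number of person-to-person links in a fully connected team of n people."""
    return n * (n - 1) // 2

for n in (2, 3, 10, 50):
    print(f"team of {n:2d}: {pairwise_channels(n):4d} channels")
```

A pair has 1 channel while a team of 50 has 1,225, which is one way to read the preference for many small parallel teams over one large coordinated one.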
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
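A hypothetical sketch of what “measurable objectives” like these could look like in code; the names and numbers are illustrative, not Perplexity’s actual tracking:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    target: float | None = None  # quantifiable threshold
    actual: float | None = None
    done: bool | None = None     # Boolean "was X completed or not"

    def completion(self) -> float:
        if self.done is not None:                   # Boolean objective
            return 1.0 if self.done else 0.0
        return min(self.actual / self.target, 1.0)  # threshold objective

objectives = [
    Objective("weekly active users", target=10_000, actual=6_500),
    Objective("ship voice input", done=True),
]
score = sum(o.completion() for o in objectives) / len(objectives)
print(f"quarter completion: {score:.0%}")  # 82% for this toy data
```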
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
All teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
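A common way to implement this kind of layered experimentation is deterministic hash-based bucketing; a sketch under that assumption (the experiment names are made up, and this is not Perplexity’s actual system):

```python
import hashlib

def assign_arm(user_id: str, experiment: str, arms: int = 2) -> int:
    """Deterministic bucketing; salting with the experiment name keeps
    experiments in different layers of the stack independent."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % arms

uid = "user-42"
# Each team tests within its own layer...
layers = {name: assign_arm(uid, name)
          for name in ("retrieval-reranker", "answer-layout")}
# ...but every experiment is judged against the same shared top-level
# metric (e.g. queries per active user), not per-component metrics.
print(layers)
```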
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management and bug tracking?

Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving: if a task hasn’t been mentioned in a while, chances are it’s not actually important.

The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.

Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
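The auto-archiving heuristic is easy to approximate. A minimal sketch, assuming “stale” means not mentioned within some window; the 90-day cutoff and task names are made up, and this is not Linear’s actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    title: str
    last_mentioned: datetime
    archived: bool = False

def auto_archive(tasks: list[Task],
                 stale_after: timedelta = timedelta(days=90)) -> None:
    """If a task hasn't been mentioned in a while, it's probably not important."""
    cutoff = datetime.now() - stale_after
    for task in tasks:
        if task.last_mentioned < cutoff:
            task.archived = True

tasks = [
    Task("fix citation tooltip", last_mentioned=datetime.now() - timedelta(days=3)),
    Task("evaluate new crawler", last_mentioned=datetime.now() - timedelta(days=200)),
]
auto_archive(tasks)
print([(t.title, t.archived) for t in tasks])  # only the 200-day-old task is archived
```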
High-level objectives and directions come top-down, but a large number of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product, where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
·lennysnewsletter.com·
Learn from others’ experiences with more perspectives on Search
In the coming weeks, when you search for something that might benefit from the experiences of others, you may see a Perspectives filter appear at the top of search results. Tap the filter, and you’ll exclusively see long- and short-form videos, images and written posts that people have shared on discussion boards, Q&A sites and social media platforms. We’ll also show more details about the creators of this content, such as their name, profile photo or information about the popularity of their content.
Helpful information can often live in unexpected or hard-to-find places: a comment in a forum thread, a post on a little-known blog, or an article with unique expertise on a topic. Our helpful content ranking system will soon show more of these “hidden gems” on Search, particularly when we think they’ll improve the results.

We’ve also worked to improve how we rank review content on Search (for example, web pages that review businesses or destinations) to place greater emphasis on the quality and originality of the information. You’ll now see more pages that are based on first-hand experience, or are created by someone with deep knowledge in a given subject. And as we underscore the importance of “experience” as an element of helpful content, we continue our focus on information quality and critical attributes like authoritativeness, expertise and trustworthiness, so you can rely on the information you find.
·blog.google·
20 Years of SEO: A Brief History of Search Engine Optimization
In 2011, Google found its search results facing severe scrutiny because so-called “content farms” (websites that produced high volumes of low-quality content) were dominating them. Google’s SERPs were also cluttered with websites featuring unoriginal and auto-generated content; in some instances, scraper sites were even outranking the content originators.
·searchenginejournal.com·