Library

222 bookmarks
She Says: Women In News
She Says: Women in News, which originally aired on PBS in 2001, profiles ten women in positions of power within journalism and the effect they are having on the profession and the world. The ten women examined are: Judy Woodruff, CNN prime anchor; Carole Simpson, ABC News anchor; Anna Quindlen, Newsweek columnist; Narda Zacchino, San Francisco Chronicle senior editor; Geneva Overholser, syndicated columnist; Nina Totenberg, National Public Radio's legal affairs correspondent since 1975; Rena Pederson, The Dallas Morning News editorial page editor; Helen Thomas, Hearst Newspapers political reporter; Judy Crichton, first woman producer, director, and writer for CBS Reports; and Paula Madison, president and general manager at NBC4 in Los Angeles.
·academiccommons.columbia.edu·
The New Global Journalism: Foreign Correspondence in Transition
The digital journalist uses a host of new electronic sources, tools, and practices that are now part of the global reporting landscape. Digital journalists would argue that in the right circumstances, these tools enable them to offer as clear and informed a report as the journalist on the ground can produce–sometimes even clearer, because they may have access to a broader spectrum of material than a field reporter. In mainstream newsrooms, though, there is still significant skepticism about digital’s impact on foreign reporting. Many see it as the end of the era when a reporter could spend a full day–or days or weeks–reporting in the field before sitting down to write. That traditional foreign correspondent model served audiences well, bringing them vivid accounts of breaking news and nuanced analysis of longer-term developments. At the Tow Center, we believe that both forms of reporting are vital, that both are necessary to help all of us understand the world. A goal of this report is to narrow or eliminate the divide between the two, and in this spirit we lay out several objectives. First, our authors work to provide a clear picture of this new reporting landscape: Who are the primary actors, and what does the ecosystem of journalists, citizens, sources, tools, practices, and challenges look like? Second, we urge managers at both mainstream and digital native media outlets to embrace both kinds of reporting, melding them into a new international journalism that produces stories with greater insight. Third, we hope to show the strengths in traditional and digital foreign reporting techniques, with a goal of defining a hybrid foreign correspondent model–not a correspondent who can do everything, but one open to using all reporting tools and a wide range of sources. Finally, we outline governance issues in this new space–legal and operational–with an aim to help journalists report securely and independently in this digital age.
We approach these issues through five chapters, whose authors include journalists from both digital-native and mainstream media, as well as a communications scholar and a media producer for a human rights organization. While each writes from a different vantage, the overlapping insights and conclusions begin to redefine both the edges and heart of international reportage now.
·academiccommons.columbia.edu·
Post Industrial Journalism: Adapting to the Present
This essay is part survey and part manifesto, one that concerns itself with the practice of journalism and the practices of journalists in the United States. It is not, however, about “the future of the news industry,” both because much of that future is already here and because there is no such thing as the news industry anymore. If you wanted to sum up the past decade of the news ecosystem in a single phrase, it might be this: Everybody suddenly got a lot more freedom. The newsmakers, the advertisers, the startups, and, especially, the people formerly known as the audience have all been given new freedom to communicate, narrowly and broadly, outside the old strictures of the broadcast and publishing models. The past 15 years have seen an explosion of new tools and techniques, and, more importantly, new assumptions and expectations, and these changes have wrecked the old clarity. Many of the changes talked about in the last decade as part of the future landscape of journalism have already taken place; much of journalism’s imagined future is now its lived-in present. (As William Gibson noted long ago, “The future is already here. It’s just unevenly distributed.”) Our goal is to write about what has already happened and what is happening today, and what we can learn from it, rather than engaging in much speculation.
·academiccommons.columbia.edu·
Play The News: Fun and Games in Digital Journalism
More than ever before we’re consuming news in strange contexts: mixed into a stream of holiday photos on Facebook, alongside comedians’ quips on Twitter, between Candy Crush and transit directions on our smartphones. In this environment designers can take liberties with the form of the news package and the ways that audiences can interact. But it’s not just users who are invited to experiment with their news: in newsrooms and product development departments, developers and journalists are adopting play as a design and authoring process. Maxwell Foxman’s new Tow Center report, Play The News: Fun and Games in Digital Journalism, is a comprehensive documentation of this world.
·academiccommons.columbia.edu·
Amateur Footage: A Global Study of User-Generated Content
Aim of Research

The aim of this research is to provide the first comprehensive report about the use of user-generated content (UGC) among broadcast news channels. Its objectives are to understand how much UGC is used on air and online by these channels, why editors and journalists choose to use it, and under what conditions it is employed. The study intends to provide a holistic understanding of the use of UGC by international broadcast news channels.

Methodology

This research was carried out in two phases. The first involved an in-depth, quantitative content analysis examining when and how eight international news broadcasters use UGC. For this part of the research we analyzed a total of 1,164 hours of TV output and 2,254 Web pages, coding them according to parameters intended to answer the research questions. The second phase of the research was entirely qualitative. It was designed to build upon the first phase by providing a detailed overview of the professional practices that underpin the collection, verification, and distribution of UGC. To achieve this we conducted 64 interviews with news managers, editors, and journalists from 38 news organizations based in 24 countries around the world. This report brings together both phases of the research to provide a detailed overview of the key findings.
·academiccommons.columbia.edu·
Reducing Third Parties in the Network through Client-Side Intelligence
The end-to-end argument describes the communication between a client and server using functionality that is located at the end points of a distributed system. From a security and privacy perspective, clients only need to trust the server they are trying to reach instead of intermediate system nodes and other third-party entities. Clients accessing the Internet today, and more specifically the World Wide Web, have to interact with a plethora of network entities for name resolution, traffic routing, and content delivery. While individual communications with those entities may sometimes be end to end, from the user's perspective they are intermediaries the user has to trust in order to access the website behind a domain name. This complex interaction lacks transparency and control and expands the attack surface beyond the server clients are trying to reach directly. In this dissertation, we develop a set of novel design principles and architectures to reduce the number of third-party services and networks a client's traffic is exposed to when browsing the web. Our proposals bring additional intelligence to the client and can be adopted without changes to the third parties. Websites can include content, such as images and iframes, located on third-party servers. Browsers loading an HTML page will contact these additional servers to satisfy external content dependencies. Such interaction has privacy implications because it includes context related to the user's browsing history. For example, the widespread adoption of "social plugins" enables the respective social networking services to track a growing part of their members' online activity. These plugins are commonly implemented as HTML iframes originating from the domain of the respective social network. They are embedded in sites users might visit, for instance to read the news or to shop. Facebook's Like button is an example of a social plugin.
While one could prevent the browser from connecting to third-party servers, doing so would break existing functionality and thus be unlikely to be widely adopted. We propose a novel design for privacy-preserving social plugins that decouples the retrieval of user-specific content from the loading of third-party content. Our approach can be adopted by web browsers without the need for server-side changes. Our design has the benefit of avoiding the transmission of user-identifying information to the third-party server while preserving the original functionality of the plugins. In addition, we propose an architecture which reduces the networks involved when routing traffic to a website. Users then have to trust fewer organizations with their traffic. Such trust is necessary today: we observe, for example, that only 30% of popular web servers offer HTTPS. At the same time there is evidence that network adversaries carry out active and passive attacks against users. We argue that if end-to-end security with a server is not available, the next best thing is a secure link to a network that is close to the server and will act as a gateway. Our approach identifies network vantage points in the cloud, enables a client to establish secure tunnels to them, and intelligently routes traffic based on its destination. The proliferation of infrastructure-as-a-service platforms makes it practical for users to benefit from the cloud. We determine that our architecture is practical because our proposed use of the cloud aligns with existing ways end-user devices leverage it today. Users control both endpoints of the tunnel and do not depend on the cooperation of individual websites. We are thus able to eliminate third-party networks for 20% of popular web servers, reduce network paths to 1 hop for an additional 20%, and shorten the rest.
We hypothesize that user privacy on the web can be improved in terms of transparency and control by reducing the systems and services that are indirectly and automatically involved. We also hypothesize that such reduction can be achieved unilaterally through client-side initiatives and without affecting the operation of individual websites.
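The routing decision described above can be sketched in a few lines. This is a minimal illustration under assumed data, not the dissertation's actual system: the `SITE_INFO` table, the `choose_route` function, and the vantage-point names are all hypothetical stand-ins for what a real client would learn by probing HTTPS support and measuring vantage-point proximity.

```python
# A minimal sketch of the client-side routing idea: connect end to end
# when a site offers HTTPS; otherwise tunnel to a cloud vantage point
# close to the server so fewer third-party networks see the traffic.
# All names here are hypothetical, not the dissertation's system.

# A real client would build this table itself by probing HTTPS support
# and measuring which vantage point is closest to each server.
SITE_INFO = {
    "secure.example": {"https": True,  "nearest_vantage": "us-east"},
    "legacy.example": {"https": False, "nearest_vantage": "eu-west"},
}

def choose_route(host, site_info=SITE_INFO):
    """Decide how to reach `host`: ("direct", host) or ("tunnel", vantage)."""
    info = site_info.get(host)
    if info is None:
        # Unknown destination: route through a default vantage point.
        return ("tunnel", "default-vantage")
    if info["https"]:
        # End-to-end security is available, so no intermediary is needed.
        return ("direct", host)
    # No HTTPS: keep the insecure final hop as short as possible by
    # tunneling securely to the vantage point nearest the server.
    return ("tunnel", info["nearest_vantage"])
```

Because the client controls both ends of the tunnel, this policy needs no cooperation from the destination website, which matches the unilateral-adoption goal stated in the abstract.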
·academiccommons.columbia.edu·
Characterizing and Leveraging Social Phenomena in Online Networks
Social phenomena have been studied extensively at small scales by social scientists. With the increasing popularity of Web 2.0 and online social networks/media, a large amount of data on social phenomena has become available. In this dissertation we study online social phenomena, such as social influence in social networks, in various contexts. This dissertation has two major components: (1) identifying and characterizing online social phenomena, and (2) leveraging online social phenomena for economic and commercial purposes. We begin the dissertation by developing multi-level revenue sharing schemes for viral marketing on social networks. Viral marketing leverages social influence among users of the social network. For our proposed models, we develop results on the computational complexity, individual rationality, and potential reach of employing the Shapley value as a revenue sharing scheme. Our results indicate that under the multi-level tree-based propagation model, the Shapley value is a promising scheme for revenue sharing, whereas under other models there are computational or incentive compatibility issues that remain open. We continue with another application of social influence: social advertising. Social advertising, a new paradigm utilized by online social networks, is based on the premise that social influence can be leveraged to place ads more efficiently. The goal of our work is to understand how social ads can affect click-through rates in social networks. We propose a formal model for social ads in the context of display advertising. In our model, ads are shown to users one after the other. The probability of a user clicking an ad depends on the users who have clicked it so far. This information is presented to users as a social cue, and the click probability is a function of this cue.
We introduce the social display optimization problem: suppose an advertiser has a contract with a publisher for showing some number (say B) impressions of an ad. What strategy should the publisher use to show these ads so as to maximize the expected number of clicks? We show hardness results for this problem and, in light of the general hardness results, we develop heuristic algorithms and compare them to natural baseline ones. We then study distributed content curation on the Web. In recent years readers have turned to the social web to consume content. In other words, they rely on their social network to curate content for them, as opposed to the more traditional way of relying on news editors for this purpose -- this is an implicit consequence of social influence as well. We study how efficient this is for users with limited budgets of attention. We model distributed content curation as a reader-publisher game and show various results. Our results imply that in the complete information setting, when publishers maximize their utility selfishly, distributed content curation reaches an equilibrium which is efficient; that is, the social welfare is a constant factor of that under an optimal centralized curation. Next, we initiate the study of an exchange market problem without money that is a natural generalization of the well-studied kidney exchange problem. From the practical point of view, the problem is motivated by barter websites on the Internet, e.g., swap.com and u-exchange.com. In this problem, each user of the social network has some items to offer and wishes to receive some items from other users. A mechanism specifies for each user a set of items that she gives away, and a set of items that she receives.
Each agent would like to receive as many items as possible from the items that she wishes; that is, her utility is equal to the number of items that she receives and wishes. However, she will have a large disutility if she gives away more items than she receives, because she considers such a trade to be unfair. To ensure voluntary participation (also known as individual rationality), we require the mechanism to avoid this. We consider different variants of this problem, with and without a constraint on the length of the exchange cycles, and show different results, including their truthfulness and individual rationality. In the other main component of this thesis, we study and characterize two other social phenomena: (1) friends vs. the crowd and (2) altruism vs. reciprocity in social networks. More specifically, we study how a social network user's actions are influenced by her friends vs. the crowd's opinion. For example, in social rating websites where both friends' ratings and the crowd's average ratings are available, we study how similar a user's ratings are to each. In the next part, we aim to analyze the motivations behind users' actions on online social media over an extended period of time. We look specifically at users' likes, comments, and favorite markings on their friends' posts and photos. Most theories of why people exhibit prosocial behavior isolate two distinct motivations: altruism and reciprocity. In our work, we focus on identifying the underlying motivations behind users' prosocial giving on social media. In particular, our goal is to identify whether the motivation is altruism or reciprocity. For that purpose, we study two datasets of sequences of users' actions on social media: a dataset of wall posts by users of Facebook.com, and another dataset of favorite markings by users of Flickr.com. We study the sequence of users' actions in these datasets and provide several observations on patterns related to their prosocial giving behavior.
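The Shapley-value revenue sharing idea mentioned above can be illustrated on a toy example. The referral model and the numbers below are invented for this sketch and are not the dissertation's propagation model: a sale worth 10 completes only if both referrers A and B participate, while a third user C contributes nothing.

```python
# A toy illustration of Shapley-value revenue sharing for viral
# marketing. The model is hypothetical: a sale worth 10 completes
# only if both referrers A and B participate; C free-rides.
from fractions import Fraction
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the players (exponential -- toy sizes only)."""
    phi = {p: Fraction(0) for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: share / n_fact for p, share in phi.items()}

def sale_value(coalition):
    """Characteristic function: revenue 10 iff the full referral
    chain {A, B} is contained in the coalition."""
    return 10 if {"A", "B"} <= coalition else 0

shares = shapley_values(["A", "B", "C"], sale_value)
# A and B split the revenue evenly; the non-contributing C gets nothing.
```

The exponential enumeration is what makes exact Shapley computation hard at scale, which is consistent with the abstract's note that computational issues arise under some propagation models.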
·academiccommons.columbia.edu·
Identification and Characterization of Events in Social Media
Millions of users share their experiences, thoughts, and interests online, through social media sites (e.g., Twitter, Flickr, YouTube). As a result, these sites host a substantial number of user-contributed documents (e.g., textual messages, photographs, videos) for a wide variety of events (e.g., concerts, political demonstrations, earthquakes). In this dissertation, we present techniques for leveraging the wealth of available social media documents to identify and characterize events of different types and scale. By automatically identifying and characterizing events and their associated user-contributed social media documents, we can ultimately offer substantial improvements in browsing and search quality for event content. To understand the types of events that exist in social media, we first characterize a large set of events using their associated social media documents. Specifically, we develop a taxonomy of events in social media, identify important dimensions along which they can be categorized, and determine the key distinguishing features that can be derived from their associated documents. We quantitatively examine the computed features for different categories of events, and establish that significant differences can be detected across categories. Importantly, we observe differences between events and other non-event content that exists in social media. We use these observations to inform our event identification techniques. To identify events in social media, we follow two possible scenarios. In one scenario, we do not have any information about the events that are reflected in the data. In this scenario, we use an online clustering framework to identify these unknown events and their associated social media documents. To distinguish between event and non-event content, we develop event classification techniques that rely on a rich family of aggregate cluster statistics, including temporal, social, topical, and platform-centric characteristics. 
In addition, to tailor the clustering framework to the social media domain, we develop similarity metric learning techniques for social media documents, exploiting the variety of document context features, both textual and non-textual. In our alternative event identification scenario, the events of interest are known, through user-contributed event aggregation platforms (e.g., Last.fm events, EventBrite, Facebook events). In this scenario, we can identify social media documents for the known events by exploiting known event features, such as the event title, venue, and time. While this event information is generally helpful and easy to collect, it is often noisy and ambiguous. To address this challenge, we develop query formulation strategies for retrieving event content on different social media sites. Specifically, we propose a two-step query formulation approach, with a first step that uses highly specific queries aimed at achieving high-precision results, and a second step that builds on these high-precision results, using term extraction and frequency analysis, with the goal of improving recall. Importantly, we demonstrate how event-related documents from one social media site can be used to enhance the identification of documents for the event on another social media site, thus contributing to the diversity of information that we identify. The number of social media documents that our techniques identify for each event is potentially large. To avoid overwhelming users with unmanageable volumes of event information, we design techniques for selecting a subset of documents from the total number of documents that we identify for each event. Specifically, we aim to select high-quality, relevant documents that reflect useful event information. 
For this content selection task, we experiment with several centrality-based techniques that consider the similarity of each event-related document to the central theme of its associated event and to other social media documents that correspond to the same event. We then evaluate both the relative and overall user satisfaction with the selected social media documents for each event. The existing tools to find and organize social media event content are extremely limited. This dissertation presents robust ways to organize and filter this noisy but powerful event information. With our event identification, characterization, and content selection techniques, we provide new opportunities for exploring and interacting with a diverse set of social media documents that reflect timely and revealing event content. Overall, the work presented in this dissertation provides an essential methodology for organizing social media documents that reflect event information, towards improved browsing and search for social media event data.
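The online clustering scenario described above can be sketched with a deliberately simple single-pass algorithm. This is a toy illustration, not the dissertation's implementation: it uses plain Jaccard token overlap, where the actual work learns similarity metrics over rich textual and non-textual context features, and the threshold value here is arbitrary.

```python
# A toy sketch of single-pass online clustering for event identification:
# each incoming document joins the most similar existing cluster if the
# similarity clears a threshold, else it starts a new (potential) event.

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def online_cluster(docs, threshold=0.3):
    """Assign each document (a string) to a cluster in one streaming pass."""
    clusters = []  # each cluster is a list of member token sets
    labels = []
    for doc in docs:
        tokens = set(doc.lower().split())
        best, best_sim = None, 0.0
        for i, members in enumerate(clusters):
            # Compare against the cluster "centroid": the union of tokens.
            sim = jaccard(tokens, set().union(*members))
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= threshold:
            clusters[best].append(tokens)   # join the most similar cluster
            labels.append(best)
        else:
            clusters.append([tokens])       # start a new cluster
            labels.append(len(clusters) - 1)
    return labels

labels = online_cluster([
    "earthquake hits city center",
    "major earthquake city damage",
    "free concert in the park",
])
# The two earthquake documents share enough tokens to form one cluster.
```

In the dissertation's framework, cluster-level statistics (temporal, social, topical, platform-centric) would then feed a classifier that separates event clusters from non-event content.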
·academiccommons.columbia.edu·
Lies, Damn Lies and Viral Content
To track and analyze the way news sites handle online rumors and unverified claims, research assistant Jocelyn Jurich and I identified rumors circulating in online media, and captured and analyzed them using the Emergent database and related public website. I also spoke with journalists, skeptics, and others engaged in efforts to debunk online misinformation. This report is the result of our research. One key conclusion is that journalists are squandering much of the value of rumors and emerging news by moving too quickly and thoughtlessly to propagation. News websites dedicate far more time and resources to propagating questionable and often false claims than to verifying and/or debunking viral content and online rumors. The debunking efforts that do exist at news organizations are scattershot and are not rooted in best practices identified in previous research. Rather than acting as a source of accurate information, online media frequently promote misinformation in an attempt to drive traffic and social engagement. The result is a situation where lies spread much farther than the truth, and news organizations play a powerful role in making this happen. This research has quantified many bad practices of online media. In doing so, it clearly articulates areas for improvement. I also believe it reveals a way forward, where news organizations move to occupy the middle ground between mindless propagation and wordless restraint. Journalists today have an imperative—and an opportunity—to sift through the mass of content being created and shared in order to separate true from false, and to help the truth to spread. This report includes a set of specific and, where possible, data-driven recommendations for how this anti-viral viral strategy can be executed. My hope is that this report helps newsrooms see the bad practices that must be stopped, and apply better strategies for reporting in our new world of emergent news.
·academiccommons.columbia.edu·
Guide to Crowdsourcing
The term “crowdsourcing” has been around for a decade. Although Wired writer Jeff Howe coined it in 2006, the ways in which news organizations define and employ it today vary enormously. This guide is organized around a specific journalism-related definition of crowdsourcing and provides a new typology designed to help practitioners and researchers understand the different ways crowdsourcing is being used both inside and outside newsrooms. This typology is explored via interviews and case studies. The research shows that crowdsourcing is credited with helping to create amazing acts of journalism. It has transformed newsgathering by introducing unprecedented opportunities for attracting sources with new voices and information, allowed news organizations to unlock stories that otherwise might not have surfaced, and created opportunities for news organizations to experiment with the possibilities of engagement just for the fun of it. Certainly, though, crowdsourcing can be high-touch and high-energy, and not all projects work the first time. To be sure, crowdsourcing businesses are flourishing outside of journalism. But within the news industry, wider systemic adoption may depend on more than enthusiasm from experienced practitioners and accolades from sources thrilled by the outreach.
·academiccommons.columbia.edu·
Guide to Chat Apps
Messaging apps now have more global users than traditional social networks—which means they will play an increasingly important role in the distribution of digital journalism in the future. While chat platforms initially rose to prominence by offering a low-cost, web-based alternative to SMS, over time they evolved into multimedia hubs that support photos, videos, games, payments, and more. While many news organizations don’t yet use messaging apps, digitally savvy outlets like BuzzFeed, Mashable, The Huffington Post, and VICE have joined a more traditional player, BBC News, in establishing a presence on a number of these platforms. To complement our research, we interviewed leadership at multiple news outlets and chat platforms, thereby synthesizing key lessons and presenting notable case studies that reflect the variety of creative and strategic work taking place within the messaging space. Most publisher efforts around messaging apps are still in a formative, experimental stage, but even these experiments have often proven effective in diversifying traffic sources for digital content. Our research indicates that one of the greatest benefits of chat apps is the opportunity to use these platforms as live, sandbox environments. The chance to play and iterate has helped several news organizations develop mobile-first content and experiential offerings that would have proved difficult in other digital environments. As these services primarily—and in some cases exclusively—exist on mobile phones, editorial teams have learned to focus purely on the mobile experience, freeing themselves from considerations about how content will appear on desktop websites or other broadcast mediums.
In developing editorial strategies for some of these wide-ranging messaging platforms, news organizations are not just helping to future-proof themselves, they are also venturing into online spaces that could enable them to reach hundreds of millions of (often young) people with whom they have never engaged before.
·academiccommons.columbia.edu·
New Frontiers in Newsgathering: A Case Study of Foreign Correspondents Using Chat Apps to Cover Political Unrest
Coverage of any breaking news event today often includes footage captured by eyewitnesses and uploaded to the social web. This has changed not only how journalists and news organizations report and produce news, but also how they engage with sources and audiences. In addition to social media platforms such as Twitter and Facebook, chat apps such as WhatsApp and WeChat are a rapidly growing source of information about newsworthy events and an essential link between participants and reporters covering those events. To look at how journalists at major news organizations use chat apps for newsgathering during political unrest, the authors focus on a case study of foreign correspondents based in Hong Kong and China during and since the 2014 Umbrella Movement protests in Hong Kong. Political unrest in Hong Kong and China often centers around civic rights and government corruption. The Umbrella Movement involved large-scale, sit-in street protests, rejecting proposed changes to Hong Kong’s electoral laws and demanding voting rights for all Hong Kong citizens. Through a combination of observation and interviews with foreign correspondents, this report explores technology’s implications for reporting political unrest: how and why the protestors and official sources used chat apps, and the ways foreign reporters used chat apps (which are typically closed platforms) for newsgathering, internal coordination, and information sharing.
·academiccommons.columbia.edu·
Toward a Constructive Technology Criticism
In this report, the author draws on interviews with journalists and critics, as well as a broad reading of published work, to assess the current state of technology coverage and criticism in the popular discourse, and to offer some thoughts on how to move the critical enterprise forward. Tow Fellow Sara Watson finds that what it means to cover technology is a moving target. Today, the technology beat focuses less on the technology itself and more on how technology intersects with and transforms everything readers care about—from politics to personal relationships. But as technology coverage matures, the distinctions between reporting and criticism are blurring. Even the most straightforward reporting plays a role in guiding public attention and setting agendas. Further, she finds that technology criticism is too narrowly defined. First, criticism carries negative connotations—that of criticizing with unfavorable opinions rather than critiquing to offer context and interpretation. Strongly associated with notions of progress, technology criticism today skews negative and nihilistic. Second, much of the criticism coming from people widely recognized as “critics” perpetuates these negative associations by employing problematic styles and tactics, and by exercising unreflexive assumptions and ideologies. As a result, many journalists and bloggers are reluctant to associate their work with criticism or identify themselves as critics. And yet she finds a larger circle of journalists, bloggers, academics, and critics contributing to the public discourse about technology and addressing important questions by applying a variety of critical lenses to their work. Some of the most novel critiques about technology and Silicon Valley are coming from women and underrepresented minorities, but their work is seldom recognized in traditional critical venues. 
As a result, readers may miss much of the critical discourse about technology if they focus only on the work of a few outspoken intellectuals. Even if a wider set of contributions to the technology discourse is acknowledged, she finds that technology criticism still lacks a clearly articulated, constructive agenda. Besides deconstructing, naming, and interpreting technological phenomena, criticism has the potential to assemble new insights and interpretations. In response to this finding, she lays out the elements of a constructive technology criticism that aims to bring stakeholders together in productive conversation rather than pitting them against each other. Constructive criticism poses alternative possibilities. It skews toward optimism, or at least toward an idea that future technological societies could be improved. Acknowledging the realities of society and culture, constructive criticism offers readers the tools and framings for thinking about their relationship to technology and their relationship to power. Beyond intellectual arguments, constructive criticism is embodied, practical, and accessible, and it offers frameworks for living with technology.
·academiccommons.columbia.edu·
Toward a Constructive Technology Criticism
Guide to SecureDrop
Guide to SecureDrop
This report offers a guide to the use and significance of SecureDrop, an in-house system for news organizations to securely communicate with anonymous sources and receive documents over the Internet. SecureDrop itself is a very young technology. It was developed over the last four years, beginning during the period when the WikiLeaks submission system was down and it was unclear how else whistleblowers could safely transmit large caches of data to journalists. The history of SecureDrop’s conception and development is thus entwined with some of the most striking moments in the recent history of digital journalism: the arrival of Julian Assange as a charismatic force calling for radical transparency; the remarkable life of the technology activist Aaron Swartz; the bravery of Edward Snowden in revealing the level of surveillance now exercised by government agencies worldwide; and the resulting alliance between journalists, activists, and hackers who wish to ensure the accountability of powerful organizations by publishing information in the public interest. Through interviews with the technologists who conceived and developed SecureDrop, as well as the journalists presently using it, this report offers a sketch of the concerns that drive the need for such a system, as well as the practices that emerge when a news organization integrates this tool into its news gathering routines.
·academiccommons.columbia.edu·
Guide to SecureDrop
Guide to Automated Journalism
Guide to Automated Journalism
In recent years, the use of algorithms to automatically generate news from structured data has shaken up the journalism industry—especially since the Associated Press, one of the world’s largest and most well-established news organizations, has started to automate the production of its quarterly corporate earnings reports. Once developed, not only can algorithms create thousands of news stories for a particular topic, they also do it more quickly, cheaply, and potentially with fewer errors than any human journalist. Unsurprisingly, then, this development has fueled journalists’ fears that automated content production will eventually eliminate newsroom jobs, while at the same time scholars and practitioners see the technology’s potential to improve news quality. This guide summarizes recent research on the topic and thereby provides an overview of the current state of automated journalism, discusses key questions and potential implications of its adoption, and suggests avenues for future research. Some of the key points can be summarized as follows.
·academiccommons.columbia.edu·
Guide to Automated Journalism
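The template-driven generation described above can be sketched minimally. The company, figures, and wording below are invented for illustration and are not drawn from the AP's actual system:

```python
from string import Template

# Hypothetical structured earnings data, as a wire service might receive it.
earnings = {
    "company": "Example Corp",
    "quarter": "Q2",
    "revenue_m": 412.5,
    "revenue_prev_m": 388.0,
}

# In string.Template, "$$" renders a literal dollar sign.
TEMPLATE = Template(
    "$company reported $quarter revenue of $$${revenue}M, "
    "$direction $change% from the prior quarter."
)

def generate_story(data: dict) -> str:
    """Fill a sentence template from structured data, choosing the wording
    based on whether revenue rose or fell."""
    change = (data["revenue_m"] - data["revenue_prev_m"]) / data["revenue_prev_m"] * 100
    return TEMPLATE.substitute(
        company=data["company"],
        quarter=data["quarter"],
        revenue=f"{data['revenue_m']:.1f}",
        direction="up" if change >= 0 else "down",
        change=f"{abs(change):.1f}",
    )

print(generate_story(earnings))
# → Example Corp reported Q2 revenue of $412.5M, up 6.3% from the prior quarter.
```

Real systems layer many such templates with conditional phrasing, which is how a single pipeline can emit thousands of distinct stories from one data feed.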
You Are Here: Site-specific storytelling using offline networks
You Are Here: Site-specific storytelling using offline networks
Unlike their twentieth-century counterparts, today’s media organizations rely almost entirely on the centralized distribution infrastructure of the internet to disseminate news. Yet the internet is, in many ways, a fragile system, as illustrated by disruptive events like 2012’s Hurricane Sandy and 2016’s Mirai botnet attack on East Coast DNS servers. Over the last decade, however, the evolution of microcomputers has made it possible to build small, independent web servers that can host substantial amounts of material accessible via their own, standalone Wi-Fi signal. Such offline wireless projects have been used in classrooms, protest sites, libraries, and even for news. The goal of the You Are Here project was to develop and document a fully open-source, offline wireless system and explore how it could be used to engage audiences with community-oriented news content. Over the course of one year, our team designed, built, and tested You Are Here at two New York City locations using originally reported podcast stories to prompt users to share their own reflections and experiences about the sites. While our project suffered from some of the same challenges as previous systems, we believe that offline wireless systems hold substantial promise for safe, resilient, independent digital news distribution.
·academiccommons.columbia.edu·
You Are Here: Site-specific storytelling using offline networks
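The You Are Here system itself is open-source; purely as an illustration of the underlying idea, a self-contained server that answers requests without any internet connection, here is a minimal sketch (the page content is invented):

```python
import http.server
import socketserver
import threading
import urllib.request

class StoryHandler(http.server.BaseHTTPRequestHandler):
    """Serve one local 'story' page: a stand-in for the content a
    standalone Wi-Fi node could offer with no internet connection."""
    def do_GET(self):
        body = b"<h1>You Are Here</h1><p>A site-specific story.</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to any free port on the loopback interface and serve in a thread;
# a real deployment would bind to the device's standalone Wi-Fi interface.
server = socketserver.TCPServer(("127.0.0.1", 0), StoryHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    print(resp.status)  # 200
server.shutdown()
```

On a microcomputer, the same pattern is paired with an access point broadcasting its own SSID, so nearby visitors can reach the content with no upstream network at all.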
Pushed beyond breaking: US newsrooms use mobile alerts to define their brand
Pushed beyond breaking: US newsrooms use mobile alerts to define their brand
The aim of this research is to provide a comprehensive overview of how U.S. news outlets are using mobile push alerts to reach their audiences. Its objectives are to better understand how and why news outlets are using mobile push alerts, the decision-making process and workflows behind their use, how metrics inform strategy, and the major challenges presented by push alerts and how outlets have tackled them. The study intends to provide a detailed understanding of the use of mobile push alerts by news outlets of all sizes and backgrounds. This research took place in two phases. The first involved a quantitative content analysis examining when and how news outlets send push alerts. For this part of the research we analyzed 2,578 push alerts—2,085 from thirty-one iOS apps and 492 from fourteen Apple News channels. These alerts were collected over the three-week period between June 19 and July 9, 2017 using an iPhone 6 running iOS 10. They were coded manually using a coding scheme devised to address our research questions. The second part of the research involved twenty-three semi-structured interviews with audience managers, mobile editors, and product managers from a range of U.S. news outlets. These interviews focused on strategy and workflows, addressing issues such as how and why different outlets decide what to push, how and why they approach Apple News differently, their objectives for push alerts, how metrics are used to inform strategy, and major challenges that push alerts present. This report combines the findings from both phases of the research to provide a detailed overview of how U.S. newsrooms are approaching mobile push alerts.
·academiccommons.columbia.edu·
Pushed beyond breaking: US newsrooms use mobile alerts to define their brand
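The manual coding described in phase one yields categorical data that can be tallied straightforwardly. The outlets, topics, and counts below are invented for illustration and do not reproduce the study's coding scheme:

```python
from collections import Counter

# Invented, manually coded alerts: (outlet, topic, is_breaking).
coded_alerts = [
    ("Outlet A", "politics", True),
    ("Outlet A", "sports", False),
    ("Outlet B", "politics", True),
    ("Outlet B", "weather", True),
    ("Outlet A", "politics", False),
]

# Tally coded categories and compute the share flagged as breaking news.
by_topic = Counter(topic for _, topic, _ in coded_alerts)
breaking_share = sum(is_b for *_, is_b in coded_alerts) / len(coded_alerts)

print(by_topic.most_common(1))  # [('politics', 3)]
print(f"{breaking_share:.0%}")  # 60%
```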
Local News in a Digital World: Small-Market Newspapers in the Digital Age
Local News in a Digital World: Small-Market Newspapers in the Digital Age
Too often we tend to hear one single narrative about the state of newspapers in the United States. The newspaper industry is not one sector. While there are considerable variances between the myriad outlets—whether national titles, major metros, dailies in large towns, alt weeklies, publications in rural communities, ethnic press, and so on—a major challenge for anyone trying to make sense of industry data is its aggregated nature. It’s nearly impossible to deduce trends or characteristics at a more granular level. The story of local newspapers with circulations below 50,000, or what we call “small-market newspapers,” tends to get overlooked due to the narrative dominance of larger players. However, small-market publications represent a major cohort that we as a community of researchers know very little about, and a community of practitioners that too often—we were told—knows little about itself. Our study seeks to help redress this imbalance. We embarked on our research with a relatively simple yet ambitious research question: How are small-market newspapers responding to digital disruption? From the data collected in our research, we also strove to report on the future of small-market newspapers by asking: How can small-market newspapers best prepare for the future? Our research findings are based on interviews with fifty-three experts from across the publishing industry, academia, and foundations with a strong interest in the local news landscape. To assess the topic fairly against the wider news landscape, we did not limit ourselves to those with an immediate connection to small-market newspapers. From these conversations and our own analysis, seven key themes emerged.
·academiccommons.columbia.edu·
Local News in a Digital World: Small-Market Newspapers in the Digital Age
Hungry for transparency: Audience attitudes towards distributed journalism in four US cities
Hungry for transparency: Audience attitudes towards distributed journalism in four US cities
As more and more people get at least some of their news from social platforms, this study showcases perspectives on what the increasingly distributed environment looks like in day-to-day media lives. Drawing from thirteen focus groups conducted in four cities across the United States, we sample voices of residents who reflect on their news habits, the influence of algorithms, local news, brands, privacy concerns, and what all this means for journalistic business models.
·academiccommons.columbia.edu·
Hungry for transparency: Audience attitudes towards distributed journalism in four US cities
Artificial Intelligence: Practice and Implications for Journalism
Artificial Intelligence: Practice and Implications for Journalism
The increasing presence of artificial intelligence and automated technology is changing journalism. While the term artificial intelligence dates back to the 1950s, and has since acquired several meanings, there is a general consensus around the nature of AI as the theory and development of computer systems able to perform tasks normally requiring human intelligence. Since many of the AI tools journalists are now using come from other disciplines—computer science, statistics, and engineering, for example—they tend to be general purpose. Now that journalists are using AI in the newsroom, what must they know about these technologies, and what must technologists know about journalistic standards when building them? On June 13, 2017, the Tow Center for Digital Journalism and the Brown Institute for Media Innovation convened a policy exchange forum of technologists and journalists to consider how artificial intelligence is impacting newsrooms and how it can be better adapted to the field of journalism. The gathering explored questions like: How can journalists use AI to assist the reporting process? Which newsroom roles might AI replace? What are some areas of AI that news organizations have yet to capitalize on? Will AI eventually be a part of the presentation of every news story?
·academiccommons.columbia.edu·
Artificial Intelligence: Practice and Implications for Journalism
Understanding the General Data Protection Regulation: A Primer for Global Publishers
Understanding the General Data Protection Regulation: A Primer for Global Publishers
The General Data Protection Regulation (GDPR) is a major piece of EU legislation that will transform the operating landscape for any organization that handles data about EU residents. While the regulation, which becomes enforceable on May 25, 2018, will have the greatest impact on technology companies and advertising networks that directly monetize user data, the media companies that often depend on them for both reach and revenue will also be significantly affected by the changes it brings—both directly and indirectly. The goal of this report is to provide an overview of the regulation and its likely impacts for news organizations and publishers with primary audiences outside the European Union. Unlike its predecessors, the GDPR applies to organizations that collect data about EU residents, whether or not that organization has a physical presence in the EU. What’s more, violations can incur fines of twenty million euros or more. Thus, while non-EU news organizations may be less likely to come under immediate scrutiny because of the GDPR, they are still subject to its provisions and will benefit from thinking strategically about many of the issues it addresses. Moreover, the GDPR is likely to cause substantial changes in the operations of both digital platform and advertising companies, the effects of which will have undeniable consequences for publishers.
·academiccommons.columbia.edu·
Understanding the General Data Protection Regulation: A Primer for Global Publishers
The Future of Local News in New York City
The Future of Local News in New York City
US-based reporting jobs are increasingly concentrated within a small number of major metropolitan areas, driven by digital journalism outlets, according to research over the past few years from media analysts like Joshua Benton at Harvard’s Nieman Lab and Jack Shafer and Tucker Doherty of Politico. As for cities where journalism jobs still flourish, New York City is atop that list. According to a 2015 analysis by Jim Tankersley in The Washington Post, the number of reporting jobs in New York basically held steady in the years between 2004 and 2014, while the number of reporting jobs outside that city, Los Angeles, and Washington, D.C., dropped by 25 percent in the same time period. However, the proliferation of new, often unstable digital journalism hiring booms in the largest city in the US has masked just how dire the situation is for local reporting. Paul Moses illustrated this aptly in a 2017 piece for The Daily Beast, based on research for the CUNY Graduate School of Journalism’s Urban Reporting Program, highlighting a lack of any dedicated reporter covering Queens County courts (which would be the nation’s fourth largest city if it stood on its own). He wrote, “The problem for local news coverage is the simple fact that a story aimed at a national audience is likelier to generate heavy web traffic than a local one. Original local news reporting is threatened not only by layoffs but by the transfer of jobs to writing on whatever is of interest to a national web audience.” This common concern for the troubling state of local news in New York City led the Tow Center for Digital Journalism at Columbia University, the New York City Mayor’s Office of Media and Entertainment, and WNYC to convene an off-the-record roundtable discussion focused on The Future of Local News on February 9, 2018, at the Columbia University School of Journalism. 
The goal of the discussion was to bring together a select group of journalists, publishers, academics, funders, public-sector representatives, and other experts to discuss how to reverse the crisis in poorly resourced New York local media and work toward innovative solutions to ensure a sustainable future for local news. The half-day roundtable took place in the morning and comprised a closed discussion built around three major questions: 1) What is the state of local journalism in New York City at the beginning of 2018? 2) What trends and emerging business models in local news across the US and internationally might we be able to learn from? 3) Where do we go from here? What are possible futures for local media in New York?
·academiccommons.columbia.edu·
The Future of Local News in New York City
The Future of Advertising and Publishing
The Future of Advertising and Publishing
The rise of digital media has fundamentally changed the relationship between marketers and publishers. As audiences increasingly move toward mobile consumption, publishers have had to adapt their business models based on new standards set by social media platforms and advertisers. They are now in competition with large tech companies to reach, and to own, the same audiences. The adtech ecosystem, which was designed by platforms and advertisers to capitalize on the growing amount of data retained about readers, has made a mess of publishers’ main monetization strategies—leaving the entire space in dire need of evaluation and experimentation. On October 20, 2017, the Tow Center for Digital Journalism at Columbia University, the Digital Initiative at Harvard Business School, and the Shorenstein Center on Media, Politics, and Public Policy at the Harvard Kennedy School hosted a Policy Exchange Forum (PEF) and public conference to explore these shifts in “The Future of Advertising and Publishing.” The PEF took place in the morning and comprised a closed discussion (by invitation only) built around two major questions: What is the future of the relationship between publishers and advertisers? More specifically, how can platforms, news publishers, and advertisers ensure a robust future for news publishers by shaping the quality of advertising?
·academiccommons.columbia.edu·
The Future of Advertising and Publishing
Digital Adaptation in Local News
Digital Adaptation in Local News
More than a quarter century after the creation of the World Wide Web, nine in ten Americans get at least some news online. But in many ways, local news publishing is still adapting to the internet as a news medium. For many publishers, the internet is like an ill-fitting suit: functional, but not made for them. These are some findings from a study of the digital footprint of more than 2,000 U.S. local news outlets. While many studies have explored digital transformation of newsrooms through direct interviews, case studies, and ethnography, this report attempts to tell the story of that transformation by the numbers. The study also offers comparative perspective between various sectors of local media—including radio and television broadcast, daily and weekly print, digital-native publishers, and collegiate press.
·academiccommons.columbia.edu·
Digital Adaptation in Local News
Collaboration and the Creation of a New Journalism Commons
Collaboration and the Creation of a New Journalism Commons
The history of journalism includes many and varied forms of cooperation, as far back as landmark events such as the creation of the Associated Press by five New York newspapers in 1846 to share costs related to the coverage of the Mexican-American War. What sets the current phase of collaboration apart from previous ones is the wide diffusion of networked forms of organization and production, and the transformative impact of these cooperative practices in reshaping the new media world and its underlying social and technological infrastructure as public utilities. This report explores the gradual development of this phenomenon and the related development of a new commons for journalism, or a collection of shared resources and communities reconfiguring the material and cultural conditions of newswork as a social practice subject to dilemmas that require cooperation. The journalism commons, often going unrecognized in the academic and public discourse on the future of media, offers a framework to make sense of the new schemes of human relations, production, and governance.
·academiccommons.columbia.edu·
Collaboration and the Creation of a New Journalism Commons
The Audience in the Mind's Eye: How Journalists Imagine Their Readers
The Audience in the Mind's Eye: How Journalists Imagine Their Readers
The conventional wisdom of the digital era is that journalists can now know their audiences in far more intimate detail than at any other time in the history of the profession. Previously, journalists based their audience knowledge primarily on their closest social circles. Now, new tools can help them solicit readers’ feedback, analyze and understand readers’ behavior, and open new channels for conversation. These new capabilities promise to shine a light on the abstract audience—making one’s readers present, quantified and real. Drawing on the existing literature and an original case study, this paper asks whether the new tools of the digital age have indeed influenced the “audience in the mind’s eye.” Our evidence indicates that for the most part, they have not. In reviewing findings from the case study, we were struck by how little seems to have changed since the print era. Although they seemed more open to audience knowledge, the ways in which these reporters thought about their audiences were remarkably similar to those reported in classic ethnographies of the 1970s. The paper concludes with some hypotheses about why this may be so, and offers some possible approaches to improve audience awareness in the newsroom—in particular, a new perspective on the necessity (and difficulty) of diversity. It is our hope that this paper will inspire future research and experimentation—to narrow the gap between the audiences journalists have in mind and the audiences they serve.
·academiccommons.columbia.edu·
The Audience in the Mind's Eye: How Journalists Imagine Their Readers
Guide to Open Source Intelligence (OSINT)
Guide to Open Source Intelligence (OSINT)
Open source intelligence, which researchers and security services style OSINT, is one of the most valuable tools to a contemporary reporter, because of the vast amount of publicly available online information. Reporters conducting OSINT-based research should aspire to use the information they gather online to peer behind the superficial mask of the internet—the anonymous avatars on Twitter, for example, or the filtered photographs on Instagram—and tell the story of the real, flesh-and-blood human beings on the other side of our screens. Every time we go online, we give up part of our identity. Sometimes, it comes in the form of an email used to make a Twitter account. Other times, it’s a phone number for two-factor authentication, or days’ and weeks’ worth of timestamps suggesting when a user is awake and asleep. Journalists can piece together clues like this and use them to tell stories which are of interest to the public. The following guide is written to provide a basic foundation not only for doing that work, but also for verifying the information, archiving findings, and interacting with hostile communities online. The closer we get to understanding the people who make the influential and newsworthy aspects of the internet happen—and their motivations—the easier our work of discovery becomes.
·academiccommons.columbia.edu·
Guide to Open Source Intelligence (OSINT)
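One of the clue types mentioned above, timestamps suggesting when a user is awake and asleep, can be sketched as a simple tally. The account and timestamps below are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Invented UTC timestamps of posts from a hypothetical public account.
timestamps = [
    "2023-05-01T14:02:00+00:00", "2023-05-01T15:40:00+00:00",
    "2023-05-02T13:55:00+00:00", "2023-05-02T16:10:00+00:00",
    "2023-05-03T14:30:00+00:00", "2023-05-03T02:05:00+00:00",
]

def active_hours(stamps: list) -> Counter:
    """Tally posts by UTC hour: sustained clusters hint at a poster's
    waking hours and, very roughly, their time zone."""
    return Counter(datetime.fromisoformat(s).hour for s in stamps)

print(active_hours(timestamps).most_common(1))  # [(14, 2)]
```

In practice such signals are weak on their own and only become meaningful when corroborated against other public traces, and when the findings are verified and archived.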
Blockchain in Journalism
Blockchain in Journalism
Blockchain, like the internet, or democracy, or money, is many overlapping things. It is a decentralized record of cryptocurrency transactions. It is a peer-to-peer network of computers. It is an immutable, add-on-only database. What gets confusing is the way in which these overlapping functions override one definition or explanation of blockchain, only to replace it with an altogether different one. The conceptual overlaps are like glass lenses dropped on top of one another, scratching each other’s surface and confusing each other’s focal dimensions. This guide takes apart the stack of these conceptual lenses and addresses them one by one through the reconstruction of the basic elements of blockchain technology. The first section of this report gives a short history of blockchain, then describes its main functionality, distinguishing between private and public blockchains. Next, the guide breaks down the components and inner workings of a block and the blockchain. The following section focuses on blockchain’s journalistic applications, specifically by differentiating between targeted solutions that use blockchain to store important metadata journalists and media companies use on a daily basis, and hybrid solutions that include targeted solutions but introduce cryptocurrency, thereby changing the journalistic business model altogether. Finally, the report speculates on the proliferation of what are known as Proof-of-Stake blockchain models, the spread of “smart contracts,” and the potential of enterprise-level and government-deployed blockchains, all in relation to what these mean to newsrooms and the work of reporters.
·academiccommons.columbia.edu·
Blockchain in Journalism
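The inner workings of a block, the part of the guide that lends itself to code, reduce to a simple idea: each block carries the hash of its predecessor. A toy sketch with invented payloads, not any production blockchain:

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """A block bundles its payload with the previous block's hash, so
    altering any earlier block invalidates every later one."""
    block = {"data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify(chain: list) -> bool:
    """Recompute each block's hash and check the links between blocks."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny two-block chain, then tamper with it.
genesis = make_block({"note": "genesis"}, prev_hash="0" * 64)
second = make_block({"note": "article metadata"}, prev_hash=genesis["hash"])
print(verify([genesis, second]))  # True
second["data"]["note"] = "tampered"
print(verify([genesis, second]))  # False
```

Real blockchains add consensus, timestamps, and peer-to-peer replication on top of this hash-linking, but the "add-on-only" property the guide describes comes from exactly this structure.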
A Public Record at Risk: The Dire State of News Archiving in the Digital Age
A Public Record at Risk: The Dire State of News Archiving in the Digital Age
This research report explores archiving practices and policies across newspapers, magazines, wire services, and digital-only news producers, with the aim of identifying the current state of archiving and potential strategies for preserving content in an age of digital distribution. Between March 2018 and January 2019, we conducted interviews with 48 individuals from 30 news organizations and preservation initiatives. What we found was that the majority of news outlets had not given any thought to even basic strategies for preserving their digital content, and not one was properly saving a holistic record of what it produces. Of the 21 news organizations in our study, 19 were not taking any protective steps at all to archive their web output. The remaining two lacked formal strategies to ensure that their current practices have the kind of longevity to outlast changes in technology. Meanwhile, interviewees frequently (and mistakenly) treated digital backup and storage in Google Docs or content management systems as synonymous with archiving. (They are not the same; backup refers to making copies for data recovery in case of damage or loss, while archiving refers to long-term preservation, ensuring that records will still be available even as formatting and distribution technologies change in the future.) Instead, news organizations have handed over their responsibilities as public stewards to third-party organizations such as the Internet Archive, Google, Ancestry, and ProQuest, which store and distribute copies of news content on remote servers. As such, the news cycle now includes reliance on proprietary organizations with increasing control over the public record. The Internet Archive aside, the larger issue is that their incentives are neither journalistic nor archival, and may conflict with both.
While there are a number of news archiving initiatives being developed by both individuals and nonprofits, it is worth noting that preserving digital content is not, first and foremost, a technical challenge. Rather, it’s a test of human decision-making and a matter of priority. The first step in tackling an archival process is the intention to save content. News organizations must get there. The findings of this study should be a wakeup call to an industry fond of claiming that democracy cannot be sustained without journalism, one which anchors its legitimacy on being a truth and accountability watchdog. In an era where journalism is already under attack, managing its record and future are as important as ever. Local, independent, and alternative news sources are especially at risk of not being preserved, threatening to leave critical exclusions in a record that will favor dominant versions of public history. As the sudden Gawker shutdown demonstrated in 2016, content can be confiscated and disappear instantly without archiving practices in place.
·academiccommons.columbia.edu·
A Public Record at Risk: The Dire State of News Archiving in the Digital Age
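The backup-versus-archiving distinction the report draws can be made concrete in code. This is a hypothetical sketch, not any organization's actual workflow: a backup is a bare copy, while an archival record adds fixity and format metadata:

```python
import hashlib
import json
import shutil
import tempfile
from pathlib import Path

def back_up(src: Path, dst_dir: Path) -> Path:
    """Backup: a plain copy kept for disaster recovery, nothing more."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(src, dst_dir))

def archive(src: Path, dst_dir: Path) -> Path:
    """Archiving: a copy *plus* fixity and format metadata, so the record
    stays verifiable as formats and technologies change."""
    copy = back_up(src, dst_dir)
    meta = {
        "original_name": src.name,
        "sha256": hashlib.sha256(src.read_bytes()).hexdigest(),
        "media_type": "text/html",              # assumed; real systems detect this
        "archived_at": "2023-01-01T00:00:00Z",  # fixed value for the demo
    }
    meta_path = copy.parent / (copy.name + ".meta.json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return copy

tmp = Path(tempfile.mkdtemp())
story = tmp / "story.html"
story.write_text("<h1>Local news</h1>")
record = archive(story, tmp / "archive")
print((record.parent / (record.name + ".meta.json")).exists())  # True
```

Standards like WARC formalize this bundling of content, checksums, and capture metadata; the point here is only that preservation requires recording more than the bytes themselves.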
A Guide to Native Advertising
A Guide to Native Advertising
Native advertising is the central digital-revenue stream for the publishing industry. It makes up some 60 percent of the market, or $32.9 billion, according to a 2018 forecast by market research firm eMarketer. Understanding why the trend has been enthusiastically embraced by numerous news organizations requires a fuller appreciation of the changes that shape our information environment.
·academiccommons.columbia.edu·
A Guide to Native Advertising