Here’s a list of interesting things I read in March.
1. Unleashing the power of data visualization in music debates
Imagine walking into a bar and sparking a debate with the question, “What defines the greatest album of all time?” This exercise in subjectivity resonates with many, yet it often dissolves into the mumblings of agreeing to disagree. Enter The Pudding, an online journal that doesn’t just start these debates — it arms you with the sharpest stats to win them.
Have a gander: What makes an album the greatest of all time?
2. Exploring Negative Capability
Coined by the poet John Keats, ‘Negative Capability’ is the ability to dwell in intellectual uncertainty and mystery — to pursue truth without being disturbed by a lack of resolution. Against the backdrop of tech’s relentless push for definitive answers, this essay by Ness Labs was a welcome respite for those swimming in the sea of innovation.
Read it: Negative capability: how to embrace intellectual uncertainty
3. Google, Gen AI, and publishers
A recent Adweek piece revealed that Google is paying a handful of digital publishers to experiment with an unreleased Gen AI platform. Google is investing in so many areas at once, it’s hard to keep up!
Check it out: Google is paying publishers to test an unreleased Gen AI platform
4. The AI Learning Labyrinth: Selective Forgetting
Training artificial intelligence often means accumulating vast datasets, much of which the model never truly needs. Quanta Magazine’s piece on selective forgetting in AI learning really stood out. The crux of the essay is the hypothesis that pruning excess information can actually help a model learn more effectively.
Read the essay: How selective forgetting can help AI learn better
5. BBC AI usage guidance
Trust is the lifeblood of any media organization, and the BBC, with its responsibility to inform the public, wields it with the gravity it deserves. With Artificial Intelligence seeping into the editorial process, the corporation’s guidelines on AI usage aren’t just another rulebook; they’re a benchmark against which to reflect on our own practices.
Scan the guidelines: BBC guidance: The use of Artificial Intelligence
6. Balancing the bias in Generative AI
Adobe shares its thinking on how to reduce biased and harmful outcomes from generative AI. The piece goes beyond the token acknowledgment of AI bias, offering a blueprint built on cultural competence, transparent development, and ongoing monitoring.
Understand Adobe’s approach: Reducing biased and harmful outcomes in generative AI