2/ I co-founded and ran @songkick for 8 years. Songkick was founded in 2007 to create a better fan and artist experience around live music. We were backed by YC, Sequoia and others.
4/ In the end, the case was settled out of court two weeks before trial, for $130m. nytimes.com/2018/01/12/bus… TicketMaster was required to pay a $10m criminal fine for intrusions into Songkick’s computer systems. justice.gov/usao-edny/pr/t…
5/ Songkick was an innovator. 10 million+ fans visited Songkick each month to discover concerts and get personalised concert listings based on the music they listened to.
6/ in early 2015 we did a 50:50 merger with CrowdSurge, a leading start-up helping artists sell tickets directly to fans. Some of CrowdSurge’s customers included @arcadefire, @childishgambino, @Muse and @PaulMcCartney.
7/ After the merger, we launched a new product that combined the capabilities of both start-ups: a tool that allowed artists to allocate tickets directly to their most engaged fans and block scalpers during the onsale.
8/ We launched it alongside @Adele's 2015 global tour, the biggest tour of the decade, comparable in scale to Taylor Swift’s most recent onsale. The results were stunning.
10/ This was a MASSIVE breakthrough and we were excited to start scaling it alongside other iconic artists. Adele was able to get TicketMaster to change its rules and secure much higher ticket allocations to sell directly. Others were excited to follow her lead.
11/ Ultimately this was a product that would have radically changed the concert industry - it showed how artists and fans could come together for a better ticketing experience and it was launched as part of the largest tour of the decade.
12/ That wasn't what happened. Instead, Songkick was forced onto the defensive and sued LiveNation/TicketMaster for abuse of market power. Original complaint online here: storage.courtlistener.com/recap/gov.usco…
13/ The final $130m settlement was a big outcome for a start-up taking on a monopolist. Until that date, no private plaintiff had ever been able to proceed past summary judgment against Ticketmaster on any antitrust claim, let alone up to the eve of trial. quinnemanuel.com/the-firm/our-n…
14/ However, for me it felt like a huge failure. We weren't able to change the industry for the better, which is what every startup founder really cares about.
15/ As part of the settlement, the IP around this technology was acquired by TicketMaster. It endures as the ‘TicketMaster Verified Fan’ programme, but it feels like we would have a healthier concert industry if Songkick had been able to compete and scale this up independently.
16/ We saved the concert discovery service by selling it to Warner Music Group (where it continues to be used by millions of fans), but it was a stark lesson in how important active antitrust regulation is if you want innovation in a market.
17/ I believe the LiveNation / TicketMaster merger of 2010 was fundamentally bad for innovation in the concert industry. It allowed the largest concert promoter to combine with the largest primary ticket company. nytimes.com/2010/04/25/bus…
18/ Within a few years LN/TM was also the largest festival promoter in the world, the largest artist management company (themusicnetwork.com/guy-oseary-for…) and one of the largest secondary ticketing companies after being allowed to acquire Get Me In! and Seatwave.
19/ It is challenging to innovate in a market with this concentration of power. The only major new players in this space over the last decade are in the secondary ticketing market (Seatgeek, Viagogo), which usually ends up increasing the price of tickets. theguardian.com/money/2018/may…
20/ I am cautiously hopeful things may change, given the commitment of the Biden administration to stronger antitrust enforcement and leadership that includes Tim Wu (@superwuster), Lina Khan (@linakhanFTC), Jonathan Kanter (@JusticeATR), @amyklobuchar, @AOC and @matthewstoller
21/ Shout out to all the Songkickers past and present. We gave it our best shot.
1/ It’s been one year since I was appointed Chair of the UK AI Safety Institute. In this time, we’ve built one of the largest safety evaluation teams globally and are already conducting pre-deployment testing. This is our fourth progress report
2/ Report: aisi.gov.uk/work/fourth-pr… Over the last year, we’ve gone from building a start-up inside Government to shipping product. We’ve built one of the largest safety evaluation teams globally, with a team of over 30 technical researchers and counting.
3/ These researchers are some of the great minds in the field. Our research leadership team includes @geoffreyirving, Professor Chris Summerfield and @yaringal as our Research Directors and Jade Leung as our Chief Technology Officer.
1/ I've just left the final session of the first ever global Summit on AI Safety, chaired by @RishiSunak and @michelledonelan. A thread on how it started vs how it’s going:
2/ How it started: we had 4 goals on safety: 1) build a global consensus on risk, 2) open up models to government testing, 3) partner with other governments in this testing, 4) line up the next summit to go further. How it’s going: 4 wins:
3/ Breakthrough 1: it used to be controversial to say that AI capability could be outstripping AI safety. Now, 28 countries and the EU have agreed that AI “poses significant risks” and signed The Bletchley Declaration: gov.uk/government/pub…
1/ The Taskforce is a start-up inside government, delivering on the mission given to us by the Prime Minister: to build an AI research team that can evaluate risks at the frontier of AI. We are now 18 weeks old and this is our second progress report: gov.uk/government/pub…
2/ The frontier is moving very fast. On the current course, in the first half of 2024, we expect a small handful of companies to finish training models that could produce another significant jump in capabilities beyond state-of-the-art in 2023.
3/ As these AI systems become more capable they may augment risks. An AI system that advances towards expert ability at writing software could increase cybersecurity threats. An AI system that becomes more capable at modelling biology could escalate biosecurity threats.
1/ 11 weeks ago I agreed to Chair the UK's efforts to accelerate state capacity in AI Safety: measuring and mitigating the risks of frontier models so we can safely capture their opportunities. Here is our first progress report: gov.uk/government/pub…
2/ The Taskforce is a start-up inside government, delivering on the ambitious mission given to us by the Prime Minister. Effective start-ups send regular investor updates, so here is ours: gov.uk/government/pub…
3/ As AI systems become more capable they may significantly augment risks. An AI system that advances towards human ability at writing software could increase cybersecurity threats. An AI system that becomes more capable at modelling biology could escalate biosecurity threats.
1/ I wrote in 2018 about how accelerating AI progress would create new geopolitical challenges: ianhogarth.com/blog/2018/6/13…
2/ And for the last 5 years I’ve co-authored @stateofaireport, which has covered progress in the field in much more depth. We’ve always made it open access in the hope it might serve as a bridge between academia, industry and government.
1/ Notable how three pioneers of deep learning (recognised with their shared 2018 Turing Award) have substantially diverged on how they assess risk from superintelligence:
2/ Yoshua Bengio was one of the leading signatories to the open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" futureoflife.org/open-letter/pa…
3/ @geoffreyhinton acknowledges that his timelines to AGI have "quite recently" shifted and that "we have to think hard about how to control it"