This is a post from Julian Togelius about AI journalism. I agree with everything he says, and this is a particular soapbox issue for me. Pop science reporting is almost always bad, and reporting on AI has become especially sensationalist, ranging from whatever article is warning us of the robot uprising this week to the sloppy reasoning of Cathy O’Neil’s Weapons of Math Destruction (which I actually recommend reading; despite my strong disagreement with where she takes her argument, it hits on some very important social issues that data scientists should at least be aware of).
Piggy-backing on that last one, I’m going to switch it up and post about something I emphatically do not like: the ridiculous story that’s been floating around social media recently about Facebook’s AI allegedly inventing its own language. What actually happened is interesting, but the popular media and the increasingly tiresome (at least in his stance on AI) Elon Musk have pitched it as some harbinger of evil sentient AI on the brink of enslaving us all. It’s so sensationalist as to be laughable, except that there are real consequences for how the public views my work. With Musk and others calling for strong regulation of AI, real consequences for public perception translate into real consequences for policy, inhibiting my ability to do good, productive work. The facts of this particular case should be unsurprising and unconcerning to anybody who understands the math behind deep learning, but sadly, we are greatly outnumbered by those who don’t.
AWS Step Functions, which orchestrate Lambda functions into larger workflows, have been around for several months now, but I’m currently investigating them for production use for the first time. They’re pretty badass. The more I dive into them, the more ideas I have about how we might use them to make our machine learning workflows more efficient.
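To make that concrete, here is a minimal sketch of what a Step Functions state machine for an ML workflow might look like. This is my own hypothetical example, not anything from production: the pipeline stages (preprocess, train, evaluate) and the Lambda ARNs are placeholders. Step Functions are defined in the Amazon States Language (JSON), which this snippet builds as a Python dict and serializes.

```python
import json

# Hypothetical Amazon States Language definition for a simple ML pipeline.
# Each "Task" state invokes a Lambda function; the ARNs below are placeholders.
definition = {
    "Comment": "Hypothetical ML pipeline: preprocess, train, evaluate.",
    "StartAt": "Preprocess",
    "States": {
        "Preprocess": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:preprocess",
            "Next": "Train",
        },
        "Train": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:train",
            # Declarative retries on transient failures are one of the main
            # draws of Step Functions over hand-rolled orchestration code.
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}
            ],
            "Next": "Evaluate",
        },
        "Evaluate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:evaluate",
            "End": True,
        },
    },
}

# Serialize to the JSON you would pass when creating the state machine.
print(json.dumps(definition, indent=2))
```

The appeal is that the retry logic, branching, and sequencing live in this declarative definition rather than being scattered across the Lambda functions themselves.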