![](https://fedia.io/media/e3/38/e338dd6a5cf478a1e6e1897cbec8b0f1b43276c2b4d1672aff153f09c10fa33d.webp)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
You can’t prove there isn’t!
“Oh yeah, are you sure about that? Then why does my AirTag say it’s already landed on Jupiter, hmm? I’d like to speak to your manager.”
No wrongdoing was found!
On a surface level, sure. But the consequences of this drama will be seen across the industry. Google and Amazon both appear to have worked to keep this quiet, which raises a lot of important questions about how the business of video streaming will handle child safety.
Even though it may seem like drama, there’s an important story here, and it directly involves the futures of some key players in the tech sector, so it’s relevant.
I’m 100% certain that if he actually didn’t know the kid’s age beforehand, he would have said so in one of his responses. If the situation were anything other than what everybody is already suspecting, he would have put it out there instead of letting the internet speculate wildly. He wouldn’t just be sitting on that little nugget of information if it existed. He was too specific in his responses to have left that out unintentionally.
Yeah, and there’s a much different context. Those aren’t real children on the show. Those text threads are with adults on both ends. The entire interaction from start to finish is mediated by professionals.
We’re talking about a situation involving a real child, not a sting operation where there isn’t an actual victim. There’s a real child whose identity would be at risk of exposure if the logs were released.
This isn’t primetime TV drama. This is a real situation involving a real minor. You should take a step back from the screen for a minute if you’re struggling to see the difference.
The amount of people eager to see a sexting thread with a child is fucking absurd.
I was pressed for time when filing this year, so I didn’t want to experiment with the IRS’ new program and just went with FreeTaxUSA since I knew it would be fast and have my data from last year saved. For those who tried the new program this year, how was it?
It holds up about as much as those “Not responsible for broken windshields” stickers on the back of dump trucks. Which is to say: not at all.
Conservatives need to come up with a better tactic than banning things that are already illegal.
The sign guy returned their deposit. The shirt guy wasn’t so kind.
Video source for those who don’t feel like clicking around.
They don’t go into detail, but I’d be real interested to see a breakdown of how this was made. It looks almost entirely like actual Sora output, with the exception of Geoffrey and the TRU logo, which I think are comped-in renders. But the rest of it all looks like genuine AI output, all the way down to a bit of R’lyehian text in a few places.
It’s honestly a little scary how good this looks. Granted, this was made by a professional media team who understand how these tools work and know how to use them better than anyone else, so of course it’s going to be good. But it won’t be long at all before this becomes the baseline.
I don’t think that’s the basis of their argument.
The RIAA alleges that the generators used the record labels’ songs to illegally train the models since they didn’t have the rights holders’ permission to use the recordings. But whether the companies needed that permission is unclear. AI companies have argued that the use of training data is a case of fair use, meaning they are allowed to use the recordings with impunity.
Emphasis mine. Their concern is that the music was used for commercial purposes, not how the music came into their possession. Web scraping is already legal; that’s never been a piracy issue.
Piracy isn’t the issue; I’m not sure if we’re referencing different things here.
How the developers came to possess the training material isn’t being called into question - it’s whether or not they’re allowed to train an AI with it, and whether doing so constitutes copyright infringement. And currently, the way in which generative AI works does not cross those legal boundaries, as written.
The argument the RIAA wants to make is that using copyrighted material for the purposes of training software extends beyond the protections of fair use. I believe their argument is that acquiring music, even otherwise legally, for the explicit purpose of making new music would be considered a commercial use of the material. Basically like the difference between buying an album to listen to with your headphones and buying an album to play for a packed concert hall, suggesting that the commercial intent behind acquiring the music is what makes it illegal.
I feel that this logic follows a common misconception of generative AI. Its output isn’t made from the training data. It will take inspiration from it, but it doesn’t just mix-and-match samples from the training materials. GenAI uses statistical parameters that it builds based on that training data, but the data itself isn’t directly referenced during generation.
The way AI generates content isn’t like when Vanilla Ice sampled Under Pressure; it would be more like if Vanilla Ice had talent and could actually write music, and had accidentally written the same bass line without ever hearing Queen. While unlikely, it’s still possible, and I’m sure we’ve all experienced a similar situation; i.e., you open a comment thread to post a joke based on the headline and see the top comment is already the exact same joke you were going to make… You didn’t copy the other user, and they didn’t copy you, but you both likely share a similar experience that triggers the same associations.
For the same reasons that two different writers can accidentally tell the same story, or two different comedians can write the same joke, two different musicians can write the same melodies if they have shared inspirations. In all of those instances, both parties can create entirely original materials of their own accord, even if those materials aren’t meaningfully unique from each other. The way generative AI works isn’t significantly different, which is why this is such a legally murky situation. If generative AI were more rudimentary and were actually sampling the training data, it would be an open-and-shut copyright infringement case. But, because the materials the AI produces are original creations of its own, we get into this situation where we have to argue over where to draw the line between “inspiration” and “replication”.
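To make the “statistics, not samples” distinction concrete, here’s a deliberately toy sketch: a word-bigram generator. It is nowhere near how modern transformer or diffusion models actually work (those learn billions of continuous parameters, not simple counts), but it illustrates the same point — after training, the model holds only transition statistics, and generation draws from those statistics rather than replaying the stored training text. All names here (`train`, `generate`) are illustrative, not from any real library.

```python
import random
from collections import Counter, defaultdict

def train(text):
    """Learn only word-to-word transition counts from the text."""
    counts = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    # The "model" is just these statistics -- the original text
    # is not stored and cannot be directly looked up later.
    return counts

def generate(model, start, n=8, seed=0):
    """Sample a new sequence from the learned statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # no learned transition from this word
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

The generated sentence may never appear verbatim in the training text, even though every individual transition was learned from it — which is roughly the shape of the “inspiration vs. replication” argument, scaled down to a few lines.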
Those mailer coupons are the only reason I ever order a pizza delivery anymore. The cost of delivery fees, tips, and the food itself keeps going up and it’s becoming harder to justify the purchase unless I’m getting a significant discount somehow.
I used to order pizza fairly frequently, too. Like once every 2-3 weeks or so. But it’s just so expensive now, I think it’s been probably 3 years since I’ve ordered one.
“The basic point is that [the AI companies’] model requires a vast corpus of sound recordings in order to output synthetic music files that are convincing imitations of human music,” the suits alleged. “Because of their sheer popularity and exposure, the Copyrighted Recordings had to be included within Suno’s training data for Suno’s model to be successful at creating the desired human-sounding outputs.”
Nope, there are plenty of other ways for an AI to have produced similar notes. Say you have Song A, written by Steve. Steve grew up listening to a lot of John, who wrote songs B through Z. Steve spent his childhood listening to and being influenced by John, so when Steve eventually grows up to write Song A, it’s entirely possible for it to contain elements from songs B through Z. So if an AI trains off of Steve, it’s going to consequently pick up whatever habits Steve learned from John.
Just like how you picked up some habits from your parents, which they picked up from their parents… etc. You could develop a habit that started with an ancestor you’ve never met; who are you copying?
By design. The cruelty is the point.
There’s no “what if”, Satanic Panic never went away. If anything, it’s being boosted by social media.
I do wonder if that’s ever actually a legitimate concern for prison staffers, if they get an inmate that is exceptionally agile like this. Like, if a parkour champ gets locked up, do the guards take any additional precautions? Do they grease up all the walls and fences to make them extra hard to climb?
These, and other equally important questions, are what keep me up at night.