

There’s already been a summary judgment in this case ruling that the AI training activity was not, by itself, a copyright violation.
To comply with copyright law, not to skirt it. That’s what companies that scan large numbers of books do. See for example Authors Guild v. Google from back when Google was scanning books to add to their book search engine. Framing this like it’s some kind of nefarious act is misleading.
What do you mean, “test the sturdiness of American alliances?” All this does is destroy the sturdiness of those alliances.
There’s no way to “force” anything, different clients are going to behave however they like. Maybe if you need that level of control the Fediverse isn’t the right platform to begin with.
Which fragment of the US?
It’s at http://127.0.0.1:5001/. It’s my sex box, though. And her name is Sony.
This is why I only confess my crimes to my local LLM.
I just want it for the memes.
Oh, there’s a competition? I’m curious what the high score will be.
Yeah, this is one of the few areas where Reddit does a better job than the Fediverse, IMO.
Looks like he already drew you in, though. Catch-22.
For folks who are lamenting that the Canada/US relationship has cratered because Trump was reelected, this is something that’s been an ongoing problem since 1982. Canada has been on the “abused” side of this relationship for a long time; it’s just sighed and endured it until now.
Indeed. I was in a thread on Reddit about exactly this subject, and it was truly bizarre how adamant a lot of people were that you shouldn’t have a life jacket. They were pointing out all these things - you could get trapped inside your house, it doesn’t save you from being hit by debris, it doesn’t protect you against diseases that are in the water.
Yeah, those are all bad things. Don’t jump into floodwaters for fun! Stay out of the flood water if you can possibly manage it. But if I’m in a place where I might end up falling into floodwaters anyway, it’s far, far better to have a life vest on than to not have it on.
Why is this any different?
The judgment in the article I linked goes into detail, but essentially you’re asking for the law to let you control something that has never been yours to control before.
If an AI generates something that does indeed provably contain a sample of a piece of music in a song you recorded, then yes, that output may be something you can challenge as a copyright violation. But if the AI’s output doesn’t contain an identifiable sample, then no, it’s not yours. That’s how copyright works, it’s about the actual tangible expression.
It’s not about the analysis of copyrighted works, which is what AI training is doing. That’s never been something that copyright holders have any say over.
Funny, for me it was quite heartening. If it had gone the other way it could have been disastrous for freedom of information and culture and learning in general. This decision prevents big publishers like Disney from claiming shares of the pie - their published works are free for anyone with access to them to train on, they don’t need special permission or to pay special licensing fees.
There was actually just a big ruling on a case involving this, here’s an article about it. In short: a judge granted summary judgment that establishes that training an AI does not require a license or any other permission from the copyright holder, that training an AI is not a copyright violation and they don’t hold any rights over the resulting model.
I’m assuming this case is why we have this news about Anthropic scanning books coming out right now too.
So, people were angry at them for pirating books. Now we find they actually purchased books to scan, and people are angry about that too.
The lawsuit between NYT and OpenAI is still ongoing, this article is about a court order to “preserve evidence” that could be used in the trial. It doesn’t indicate anything about how the case might ultimately be decided.
Last I dug into the NYT v. OpenAI case, it looked pretty weak; NYT had heavily massaged their prompts in order to get ChatGPT to regurgitate snippets of their old articles, and the judge had called them out on that.