• 0 Posts
  • 236 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Oh, it’s totally freedom of speech. But freedom of speech doesn’t mean freedom to broadcast your speech on public property without exception.

    If they hung the banner on their house or private property, there would be nothing to be done to stop them.
    But you can’t hang a banner from the government’s property without permission, which must be granted in a manner impartial to the banner’s content, beyond any compelling interests like “no hanging very distracting banners where they could cause accidents”.

    They didn’t ask, so they can have their banner removed just as though they hung it from the flagpole in front of the courthouse.

    They’re being prosecuted because a racial component to a crime is an aggravating factor that makes it more appealing to prosecutors.
    So their claim is entirely correct: they’re being prosecuted because their crime was minor but made worse by being racist. We’ve already decided that it’s reasonable for the government to be particularly harsh on racist crimes because it singles out a type of behavior that’s particularly harmful to society.



  • Oh, to me it just doesn’t remotely look like they’re interested in surveillance type stuff or significant analytics.

    We’re already seeing growing commercial interest in using LLMs for things like replacing graphic designers, which is folly in my opinion, or for building better gateways and interpretive tools for existing knowledge bases or complex UIs, which could potentially have some merit.

    ChatGPT isn’t the type of model that’s helpful for surveillance: while it could tell you what’s happening in a picture, it can’t look at a billion sets of tagged GPS coordinates and tell you which one is doing some shenanigans, or look at every bit of video footage from an area and tell you which times depict certain behaviors.

    Looking to make OpenAI, who seem to me to be very clearly making a play for business-to-business knowledge management AI as a service, into a wannabe player for ominous government work seems like a stretch when we already have very clear-cut cases of AI companies doing exactly that and even more. Like, Palantir’s advertisements openly boast about how they can help your drone kill people more accurately.

    I just don’t think we need to make OpenAI into Palantir when we already have Palantir, and OpenAI has their own distinct brand of shit they’re trying to bring into the world.

    Google doesn’t benefit by selling their data, they benefit by selling conclusions from their data, or by being able to use the data effectively. If they sell it, people can use the data as often as they want. If they sell the conclusions or impact, they can charge each time.
    While the FBI does sometimes buy aggregated location data, they can more easily subpoena the data if they have a specific need, and the NSA can do that without it even being public, directly from the phone company.
    The biggest customer doesn’t need to pay, so targeting them for sales doesn’t fit, whereas knowing where you are and where you go so they can charge Arby’s $2 to get you to buy some cheese beef is a solid, recurring revenue stream.

    It’s a boring dystopia where the second largest surveillance system on the planet is largely focused on giving soap companies an incremental edge in targeted freshness.




  • Yes, neither of us is responsible for hiring someone for the OpenAI board of directors, making anything we think speculation.

    I suppose you could dismiss any thought or reasoning behind an argument as mere “reasons” to try to minimize it, but that’s a weak position to argue from. You might consider instead justifying your beliefs, or saying why you disagree, instead of just “yeah, well, that’s just, like, your opinion, man”.


  • Those aren’t contradictory. The Feds have an enormous budget for security, even just “traditional” security like everyone else uses for their systems, and not the “offensive security” we think of when we think “Federal security agencies”. Companies like Amazon, Microsoft, and Cisco will change products, build out large infrastructure, or even share the source code for their systems to persuade the feds to spend their money. They’ll do this because they have products that are valuable to the Feds in general, like AWS, or because they already have security products and services that are demonstrably valuable to the civil security sector.

    OpenAI does not have a security product; they have a security problem. The same security problem as everyone else, one the NSA is in large part responsible for managing for significant parts of the government.
    The government certainly has an interest in AI technology, but OpenAI has productized their solutions with a different focus. The Feds have already bought, from Palantir, what everyone thinks OpenAI wants to build.

    So while it’s entirely possible that they are making a play to try to get those lines of communication to government decision makers for sales purposes, it seems more likely that they’re aiming to leverage “the guy who oversaw implementation of security protocol for military and key government services is now overseeing implementation of our security protocols, aren’t we secure and able to be trusted with your sensitive corporate data”.
    If they were aiming for security productization and getting ties for that side of things, someone like Krebs would be more suitable, since CISA is a bit more well positioned for those ties to turn into early information about product recommendations and such.

    So yeah, both of those statements are true. This is a non-event with bad optics if you’re looking for it to be bad.



  • It’s a bit of a non-story, beyond basic press release fodder.

    In addition to its role as “digital panopticon”, they also have a legitimate role in cybersecurity assurance, and they’re perfectly good at it. The guy in question was the head of both the world’s largest surveillance entity and the world’s largest cybersecurity entity.
    Opinions on the organization aside, that’s solid experience managing a security organization.
    If OpenAI wants to make the case that they take security seriously, a former head of the NSA, Cyber Command, and the Central Security Service, who is also a department director at one university, a trustee at another, and holder of a couple of master’s degrees, isn’t a bad way to try to send that message.

    Other comments said OpenAI is the biggest scraping entity on the planet, but that title pretty handily goes to Google, or more likely to the actual NSA, given the whole “digital panopticon” thing and the fact that Google can’t FISA-warrant the phone company.

    Joining boards so they can write memos to the CEO/dean/regent/chancellor is just what former high ranking government people do. The job aggressively selects for overactive Leslie Knope types who can’t sit still and feel the need to keep contributing, for good or bad, in whatever way they think is important.

    If the US wanted to influence OpenAI in some way, they’d just pay them. The Feds’ budget is big enough that bigger companies will absolutely prostrate themselves for a sample of it. Or if they just wanted influence, they’d… pay them.
    They wouldn’t do anything weird with retired or “retired” officers when a pile of money is much easier and less ambiguous.

    At worst it’s OpenAI trying to buy some access to the security apparatus to get contracts. That seems less likely to me, since I don’t think they actually have anything valuable for that sector.



  • I think it stems from his argument being not about what the law says, but about if the law is constitutional.

    Very often, things that involve political figures get a much more generous interpretation of the First Amendment, which is why you get rulings like “elected officials can’t always block people on social media, even from their personal accounts”.

    They claimed that preventing him from trademarking the slogan limited his ability to monetize it, and that made it a limitation on his freedom of speech, specifically regarding a politician. Therefore the government should need to provide an explicit, compelling reason for the law as applied to politicians. Recently, trademark rules were struck down on First Amendment grounds when the Supreme Court found that rules saying you can’t trademark insulting or vulgar things amounted to the government prohibiting speech in a way it’s not allowed to.

    With this ruling, they found that the rule in question is viewpoint neutral and therefore isn’t the government disfavoring an idea or viewpoint. It’s unbiased since it’s based on (hopefully) objective facts about whether people are alive or not, unlike “is FUCT a vulgar word?” or “is it disparaging to name a band The Slants?”


  • That’s the worst idea I’ve heard on so many levels.

    Drafting people is immoral.

    Also, it’s politically stupid, because the draft is just… extremely unpopular. Universal mandatory service would be radically less popular.

    Then you’re filling the military with a bunch of new recruits who don’t want to be there. If only half the people who come up for mandatory service actually get drafted, that’s still more people than are currently in the US military. This will do wonders for effectiveness and morale.

    Finally, once they get out, you have an insane amount of GI Bill benefits to pay out, to say nothing of the long-term VA costs that come from more than doubling the size of the military. (Potentially up to a 10x increase, assuming a four-year term of service and roughly 4M 18-year-olds per year.)
    Or you can change the law to deny GI Bill benefits to draftees, which is definitely going to be popular with the people whose lives you’re stealing.
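That “up to a 10x increase” holds up as rough arithmetic. A back-of-envelope check, where the ~4M cohort size comes from the comment but the ~1.3M active-duty headcount is my own rough assumption:

```python
# Back-of-envelope check on the draft-everyone numbers above.
# The 1.3M active-duty figure is a rough assumption, not from the comment.
cohort_per_year = 4_000_000    # approx. number of US 18-year-olds each year
term_years = 4                 # assumed term of service
current_active = 1_300_000     # rough current US active-duty headcount

# Steady-state force size if every cohort member served a full term
conscript_force = cohort_per_year * term_years

print(f"conscript force at steady state: {conscript_force:,}")
print(f"multiple of current active duty: {conscript_force / current_active:.1f}x")
print(f"half of one cohort already exceeds current force: {cohort_per_year // 2 > current_active}")
```

Even half of a single year’s cohort outnumbers the current active-duty force, which is the comment’s point about morale and cost.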

    I suppose “draft everyone” is technically a way to give everyone subsidized college education and universal healthcare, but I think there are better ways.

    Just the dumbest possible people.




  • I mean, it does learn; it just lacks reasoning, common sense, and rationality.
    What it learns is which words should come next, with a very complex and nuanced way of deciding that can very plausibly mimic the things it lacks, since the best sequence of next words is very often coincidentally reasoned, rational, or demonstrative of common sense. Sometimes it’s just lies that fit the form of a good answer, though.

    I’ve seen some people work on using it the right way, and it actually makes sense. It’s good at understanding what people are saying, and what type of response would fit best. So you let it decide that, and give it the ability to direct people to the information they’re looking for, without actually trying to reason about anything. It doesn’t know what your monthly sales average is, but it does know that a chart of data from the sales system filtered to your user, specific product and time range is a good response in this situation.

    The only issue with Google jamming it into the search results is that their entire product was already just providing pointers to the “right” data.

    What they should have done was limit the “information summary” stuff to their “quick fact” lookup role, let it look only at Wikipedia and curated lists of trusted sources (Mayo Clinic, CDC, National Park Service, etc.), and then give it the ability to ask clarifying questions about searches, like “are you looking for product recalls, or recall as a product feature?”, which would disambiguate the query.
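The pattern described in this comment, where the LLM only decides what kind of response fits and ordinary code does the actual lookup, can be sketched roughly like this. The function names and the keyword-matching classifier are my own illustration standing in for a real LLM call, not any actual API:

```python
def classify_intent(query: str) -> str:
    """Hypothetical stand-in for an LLM intent classifier.

    A real system would call a model here; the point is that the model's
    only job is picking a response type, never computing the answer.
    """
    q = query.lower()
    if "sales" in q:
        return "sales_report"
    if "recall" in q:
        return "ambiguous_recall"  # product recalls, or recall as a feature?
    return "unknown"


def monthly_sales_chart(user: str, product: str) -> str:
    # Deterministic lookup against the actual sales system (stubbed here).
    # The LLM never guesses these numbers; real data comes from real queries.
    return f"[chart: monthly sales of {product} for {user}]"


def route(query: str, user: str = "demo_user") -> str:
    intent = classify_intent(query)
    if intent == "sales_report":
        # The model decided a chart fits; the system supplies the data.
        return monthly_sales_chart(user, "widgets")
    if intent == "ambiguous_recall":
        # Instead of guessing, ask the clarifying question.
        return "Are you looking for product recalls, or recall as a product feature?"
    return "Sorry, I can only route sales and recall queries in this sketch."


print(route("what's my monthly sales average?"))
print(route("search: recall"))
```

Because the model only classifies and the lookups are deterministic, the “lies that fit the form of a good answer” failure mode never touches the actual data.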


  • That’s fair. Being openly remorseless does tend to encourage the judge to give the full extent of what they’re allowed to do.

    I’m just cynical about anyone wanting to be the first person to sentence a former president to prison, and about them maybe finding any reasonable way to skirt around it, whether for “the good of the country” or some other reason, justice or not.

    Or not, and they’ll just seize the opportunity to show that justice is blind.
    We’ll find out in July. 😊





  • Prisons, on paper, have a responsibility to ensure a degree of prisoner safety. The level of effort required to give a former president that safety is beyond what even a prison oriented toward white-collar criminals can easily provide without disruption. For example, who would prepare his food? How many guards would have access to him while he slept?

    It’s possible to do, but it’s the sort of thing that could factor into the decision.