[HN Gopher] Deliberately optimizing for harm
       ___________________________________________________________________
        
       Deliberately optimizing for harm
        
       Author : herodotus
       Score  : 354 points
       Date   : 2022-03-16 13:51 UTC (9 hours ago)
        
 (HTM) web link (www.science.org)
 (TXT) w3m dump (www.science.org)
        
       | nonrandomstring wrote:
       | Many years ago a colleague who works in defence told me about a
       | job posting he'd seen but was having a moral struggle with.
       | 
       | The opening was for "Lethality Engineer": Ideal candidate with
       | good physics and medical background.
       | 
       | I said that the main perk was that at least on Halloween he
       | wouldn't need to buy a costume. He could just go out as himself.
       | 
       | He didn't take the job.
        
         | stickfigure wrote:
         | I hope recent events have illustrated that if it weren't for
         | the people who develop lethal weapons, we (as in you and I)
         | would be helpless against the bullies of the world. Unilateral
         | pacifism is a cute philosophy only when there are rough men
         | standing ready to do violence on their behalf.
        
           | prmph wrote:
           | Until the lethal weapons are turned on those (countries,
           | groups, people) who develop them. Kind of like gun owners are
           | more likely to be harmed (or harm others) by their own guns,
           | notwithstanding the arguments about personal protection used
           | to justify such ownership.
           | 
           | This has already happened with groups the US has armed in the
           | past. The US itself has been the bad guy sometimes.
           | 
           | There is no proper resolution to this struggle, and people
           | who are guided by their conscience should not be attacked for
           | having a "cute philosophy" that relies on "rough men standing
           | ready to do violence on their behalf."
        
             | pc86 wrote:
             | A sibling comment put it well that refusing to wrestle with
             | these important questions is the unethical position as it
             | just pushes the decision off onto other people. "Cute
             | philosophy" is a perfect way to describe that because it's
             | completely untenable if everyone were to think that way.
             | 
             | The gun thing is completely tautological though. Yes, if
             | you have a gun you're more likely to be injured by your gun
             | than someone who doesn't. How would someone who doesn't own
             | a gun be injured by their own gun in the first place? It's
             | like saying you're more likely to be in a car accident if
             | you own a car. Of course you are.
        
               | prmph wrote:
               | If everyone were to think that way there would be no need
               | for those weapons in the first place.
               | 
               | When I said gun owners are more likely to be harmed by
               | their guns, I meant as opposed to using the gun to protect
               | themselves. Instead of an incident where the gun comes in
               | handy, it is more likely that the gun is used in a
               | wrongful way or against oneself. I'm not sure where the
               | tautology is.
        
           | cmurf wrote:
           | We are acting pretty helpless because the bully has nuclear
           | weapons.
        
           | MattGaiser wrote:
           | Especially since we cannot really help Ukraine with anything
           | but tech and they are outnumbered, so the only advantage we
           | can give is how much better our weapons are than Russian
           | ones.
        
           | daenz wrote:
           | I generally agree with this, however, the rough men willing
           | to do violence on our behalf are more and more becoming the
           | quirky scientists who are very disconnected from the actual
           | impact of their work. I think there's a big difference
           | between those types of people. It seems like people don't
           | feel the weight of violence as much as they used to. I
           | imagine this will increase as we develop more AI driven
           | weapons.
        
             | starwind wrote:
             | This doesn't square with my experience in defense. I worked
             | in software and we saw plenty of combat and aftermath
             | footage and were always aware that the design decisions we
             | made and the tools we built meant life-and-death for
             | someone. We did our best to make sure it was the right
             | people.
             | 
             | I'd add, the weight of violence--if anything--is going up.
             | People today are devastated when a dozen soldiers and
             | scores of civilians are killed in a suicide bombing or
             | urban conflict, but go back to Vietnam and those incidents
             | barely register because they happened all the time. The
             | number of people killed in any given armed conflict has
             | dropped quite a bit in the last 50-or-so years. (The Syrian
             | Civil War is one big exception.)
        
               | daenz wrote:
               | Thanks for sharing your experience. To your last point, I
               | think it's a tradeoff. The number of individuals getting
               | killed is going down, but we're closer than ever to the
               | ability to kill _everyone_ more easily (beyond nukes:
               | weaponized viruses, etc.). Scientists who are drawn to a
               | field of research may not be practically connecting the
               | dots about what they're actually working on, or the full
               | implications of their work (eg: gain of function
               | research). These are the people I'm referring to when I
               | mention them not realizing the weight of violence that
               | they are contributing to.
        
             | phkahler wrote:
             | https://en.wikipedia.org/wiki/A_Taste_of_Armageddon
        
           | marvin wrote:
           | I strongly agree with this sentiment. _However_, it is hard
           | for an ethical person to participate in developing war
           | technology when possession and usage of the weapons is purely
           | a political question, and history also has seen our side of
           | geopolitics commit atrocities.
           | 
           | My stance has previously been that I am unwilling to work on
           | weapons technology, because history has shown that these
           | weapons sometimes end up being used for an indefensible
           | cause. Then all of a sudden you're an accomplice to murder,
           | and getting away with it.
           | 
           | In the light of Russia's invasion of Ukraine, which is just a
           | continuation of its historical imperialism, working on
           | weapons is something I would be perfectly okay with and
           | probably even motivated to do. But stop for a moment and
           | think what a history of aggressive military actions does to
           | our society's ability to recruit for this important job.
        
             | gtirloni wrote:
             | You can be happily manufacturing weapons for a good cause
             | today only to see your government turn evil the next day.
             | Unfortunately people don't cluster around ideas but around
             | geography, and that is out of their control for most.
             | 
             | This does not mean we shouldn't do something but we have to
             | realize nothing is permanent and the fruits of our labor
             | can very well be misused the next day.
        
             | starwind wrote:
             | I was a reviewer for a book on ethical machine learning
             | that wasn't published. I'll never forget, the author stated
             | "don't work on anything that could cause harm." Here I am
             | reading this while working in defense being like "that's a
             | lazy and dumb position." Nearly anything in the wrong hands
             | could cause harm.
             | 
             | It's not unethical to work in the auto industry because
             | people can die in car accidents. It's not unethical to work
             | in the beer business because people can become alcoholics.
             | It's not unethical to work for a credit card company
             | because people can bury themselves in debt. And it's not
             | unethical to work in defense because the weapons may fall
             | into the wrong hands.
             | 
             | What's unethical is encouraging these problems and not
             | trying to prevent them. And yeah, it's hard to navigate
             | these ethical issues, but we're professionals like doctors
             | and lawyers and part of the reason we get paid like we do
             | is because we may have to wrestle with these issues.
        
               | marvin wrote:
               | I'm not sure I've sufficiently communicated the
               | background of my moral ambiguity here. I came of age
               | during the War on Terror; the years where Iraq,
               | Afghanistan and Syria were the primary fronts for Western
               | military power. The brutal necessity of the Western world
               | standing up to aggression from dictatorships was not
               | so obvious during these years; from my vantage point of
               | Western media the impression was that dirt-poor suicide
               | bombers were the biggest risk to our civilization. And we
               | were dealing with those with an aggression that at best
               | left a dubious aftertaste.
               | 
               | One could be excused during these two decades for
               | erroneously assuming that the world had for the
               | foreseeable future moved on towards trade and economic
               | competition, rather than wars of aggression. With nuclear
               | weapons ensuring the balance. It was probably naive, but
               | not hopelessly naive. Against this backdrop, regularly
               | seeing weddings and maybe-civilians bombed from drones on
               | dubious intel, it doesn't seem like a childish or
               | cowardly stance to just turn one's back on the weapons
               | industry. I'd call that a considered decision.
               | 
               | The same reasoning is almost palpable in European
               | politics, which made a 180 degree shift away from this in
               | the two weeks after Putin dispelled these notions. My
               | point is, it wasn't obvious from where I stood that we
               | would be back here today. Now that we are, the calculus
               | seems clearer.
               | 
               | Maybe with a more measured US-led use of military force
               | since 2000, Western defense politics wouldn't have
               | required so much hand-wringing.
        
             | fossuser wrote:
             | Like anything difficult there are real risks and trade-
             | offs, but just refusing to engage in difficult pragmatic
             | issues is not the ethical position imo, it's just the easy
             | one that feels good. It puts the burden of actual complex
             | ethical decisions onto other people.
             | 
             | The west needs the capability to defend the ideals of
             | classical liberalism and individual liberty. In order to do
             | that it needs a strong military capability.
             | 
             | https://zalberico.com/essay/2020/06/13/zoom-in-china.html
        
               | gtirloni wrote:
               | By your logic, engaging in weapon manufacturing is the
               | only acceptable conclusion. In fact, people who refuse to
               | do so are engaging with the issue just fine, even though
               | you don't agree with their contribution.
        
               | fossuser wrote:
               | My logic is that refusing to engage is not an ethically
               | superior position when the capability is necessary.
               | Engaging in difficult, high-risk, but necessary issues as
               | best you can is.
               | 
               | That doesn't mean everyone needs to work on weapons, just
               | that the work on weapons is necessary and those that do
               | it are not ethically compromised in some way. It's just a
               | recognition of this without pretending not engaging is
               | somehow more morally pure. Not engaging is just removing
               | yourself from dealing with the actual hard ethical
               | issues.
        
               | chasd00 wrote:
               | an interesting thought given the politics of the day. If
               | you are not actively engaged in weapon manufacturing are
               | you not complicit in the murder of the Ukrainian people?
               | If you are not actively helping to supply the Ukrainian
               | army with weapons for their defense then, by your
               | inaction, are you enabling their death?
        
               | phkahler wrote:
               | >> It puts the burden of actual complex ethical decisions
               | onto other people.
               | 
               | People who may not have even considered the ethical
               | situation. It seems the people who are concerned about
               | the ethics or morality of a necessary but questionable
               | job are exactly the ones you want in that role (although
               | not activists who would try to shut it down entirely).
        
           | AkshatM wrote:
           | This comment is confusing two completely separate things.
           | There's a _world_ of difference between not being willing to
           | defend yourself and actively trying to come up with more
           | aggressive and lethal weapons.
           | 
           | The argument that "we need defense!" only justifies the need
           | to stockpile and develop _sufficiently_ lethal and tactical
           | weaponry to neutralize incoming threats (like anti-ballistic
           | systems). It doesn't justify inventing deadlier weapons. No
           | dispossessed victim of foreign invasion has ever needed a
           | bioweapon to assert themselves, and there's no chance one
           | would ever be used for anything but war crimes. You should
           | absolutely turn down roles like "Lethality
           | Engineer" from an ethical standpoint, even if you agree
           | military defense is necessary.
           | 
           | People raise the spectre of deterrence as a utilitarian
           | justification for needing more powerful weapons ("har har,
           | they'll think twice about attacking us if they know we have
           | nukes!"). But that's narrow thinking. Deterrence can be
           | achieved in other less-damning ways, like strategic alliances
           | and building more robust defense systems.
           | 
           | tl;dr defence != deadlier offence.
        
           | yosamino wrote:
           | Have you considered, though, that we (as in you and I) might
           | _be_ some of these bullies in the world and that these rough
           | men aren't just standing by, but are actively doing violence
           | on our behalf? One needn't look much further than recent
           | events to find examples aplenty.
           | 
           | I understand the point you are trying to make, but it's not
           | as easy as pretending that the weapons "we" develop are
           | purely for morally and ethically righteous purpose.
        
           | nonrandomstring wrote:
           | With respect you may be making some unfounded assumptions
           | about what I've built and what I believe.
           | 
           | My point was really about the fact that this job title
           | "Lethality Engineer" actually exists. And moreover, that it
           | asked for medical qualifications, which would go against any
           | doctor's Hippocratic oath.
           | 
           | Most of us who've done defence-related work are happy at the
           | edges, with tactical information systems, comms or guidance
           | (my stuff ended up in targeting).
           | 
           | But when it comes down to figuring out how fragments can be
           | arranged around a charge to make sure the waveshape optimally
           | penetrates as many nearby skulls as possible... hmmm suddenly
           | not so gung-ho about it.
           | 
           | That's not a distant, theoretical morality about tyrants and
           | bullies. I've no problems contemplating my family's military
           | history and am plenty proud of it, even though we'd all
           | rather live in a world without this stuff.
        
           | captainmuon wrote:
           | The flaw in that logic is that, if it weren't for the people
           | who develop lethal weapons for the _bullies_, we wouldn't
           | have to fear the bullies.
           | 
           | Also, I think the design space of "radical defense" is
           | underexplored. Our (western) armies are still designed for
           | attack and force projection, although we have long since
           | renamed our war ministers as secretaries of defense.
           | 
           | But I wonder if you could develop defense capability to make
           | your country _unattackable_. Not by threat of retaliation,
           | but for example by much much stronger missile defense. Or by
           | educating ("indoctrinating") your own population, so that an
           | occupier would not find a single collaborator? Or by mining
           | your own infrastructure, and giving every citizen basic
           | combat training (a bit like the Swiss)? Or by fostering a
           | world-wide political transformation that is designed to
           | prevent wars from happening at all?
           | 
           | I think if we wanted to spend money researching stuff to keep
           | us safe, it doesn't necessarily have to be offensive weapons.
        
             | pc86 wrote:
             | The flaw in _this_ logic is somewhat related to law
             | enforcement, in that if your military is min/maxed for
             | defense, someone who wants to do you harm only has to be
             | right once in order to actually do you harm. Looking at
             | nuclear weapons and missile defense (ignoring the existence
             | of dirty bombs etc.), your opponent needs to only be right
             | once for one of your cities and hundreds of thousands of
             | civilians to be gone. And likewise, if you've focused on
             | defense you're likely wholly unprepared for any sort of
             | retaliation.
             | 
             | The Swiss approach, what with literally bunkering in the
             | mountains and everything, is interesting, but the logistics
             | for larger countries would be exponentially harder (and
             | most lack the geographic help). "Fostering world-wide
             | political transformation" is so pie in the sky it's
             | honestly not worth serious discussion. It's fanciful.
             | 
             | Someone will always be willing to make weapons for the
             | bullies because a lot of people don't view them as bullies
             | in the first place. Ask people in Iraq, or Chechnya, or
             | Ireland, or Pakistan, or Taiwan, who the bullies are, and
             | you'll get wildly different answers that will cover
             | approximately 90% of the worldwide population.
        
             | msla wrote:
             | > The flaw in that logic is that, if it weren't for the
             | people who develop lethal weapons for the bullies, we
             | wouldn't have to fear the bullies.
             | 
             | You can't uninvent weapons, and you can't prevent the
             | bullies from making their own weapons.
             | 
             | The problem with an impenetrable defensive shield is that
             | it gives your potential enemies the heebie-jeebies
             | (technical geopolitical term) that, now that you have the
             | shield, you can attack them without fear of reprisal. If
             | the enemy thinks you're working on a credible shield (or
             | even a shield you think is credible) their best option is
             | to attack _now_ before you, emboldened by your sense of
             | invulnerability, attack them.
        
               | germinalphrase wrote:
               | This is an ongoing concern of US weapons policy. By
               | refusing to back down from improving our missile defense
               | capabilities, we undermine MAD and our adversaries'
               | willingness to engage in disarmament (thereby making it
               | more likely these weapons will be used).
        
             | dragonwriter wrote:
             | > The flaw in that logic is that, if it weren't for the
             | people who develop lethal weapons for the bullies, we
             | wouldn't have to fear the bullies.
             | 
             | False. Bullies are a problem even if no one has weapons
             | beyond what can be grabbed and used from the environment
             | without any invention. Heck, bullies are a problem if
             | everyone just has the weapons built into their bodies.
             | 
             | > But I wonder if you could develop defense capability to
             | make your country unattackable.
             | 
             | Not without incidentally developing a huge edge in
             | offensive weapons that would make you attackable when it
             | inevitably diffused to others. Uniquely defensive
             | technology mostly doesn't exist.
             | 
             | > Not by threat of retaliation, but for example by much
             | much stronger missile defense.
             | 
             | Much better interceptor missiles mean the technology for
             | much better missiles generally. Directed energy
             | interception means direct energy weapons. Hypervelocity
             | kinetic interceptors are general purpose hypervelocity
             | kinetic weapons.
             | 
             | > Or by educating ("indoctrinating") your own population,
             | so that an occupier would not find a single collaborator?
             | 
             | That kind of indoctrination can also be used offensively,
             | but the enemy doesn't need collaborators to attack you.
             | (They might need it to conquer without genocide, but
             | attackers willing to commit genocide for land are not
             | unheard of, nor are attackers whose goal isn't conquest.)
             | 
             | > Or by mining your own infrastructure, and giving every
             | citizen basic combat training (a bit like the swiss)?
             | 
             | Mining your infrastructure is itself creating a
             | vulnerability to certain kinds of attacks.
             | 
             | > Or by fostering a world-wide political transformation
             | that is designed to prevent wars from happening at all?
             | 
             | It's been tried, repeatedly. The League of Nations, the
             | Kellogg-Briand Pact, the UN. It'll be nice if someone ever
             | finds the "one weird trick to prevent war forever", but it
             | seems distinctly improbable and particularly suicidal to
             | bank your defense on the ability to find it.
        
         | mnw21cam wrote:
         | A friend is a very good university lecturer in physics, and a
         | pacifist. He isn't particularly pleased about the fact that a
         | decent number of his students will turn the particular lessons
         | he teaches towards the production of weapons.
        
         | criddell wrote:
         | Lethality Engineer? Is that a P.Eng. kind of position? If your
         | work doesn't actually kill anybody, could you be sued for
         | malpractice and lose your license?
        
           | nonrandomstring wrote:
           | They don't take anybody, the interview is murder.
        
         | memling wrote:
         | > Many years ago a colleague who works in defence told me about
         | a job posting he'd seen but was having a moral struggle with.
         | 
         | This is a good struggle to have. What's ironic in many cases is
         | that we don't experience these quandaries in other jobs, but
         | the ethical and moral ramifications _still exist_. The early
         | days of search in Google or social in Facebook probably didn't
         | elicit the same kind of thought process as a lethality engineering
         | post. (Anecdotally I spoke some years ago with an acquaintance
         | Googler who told me that he enjoyed working there precisely
         | because he was working on privacy issues that worked against
         | some of the advertising side of the business.)
         | 
         | I've worked in telecommunications, industrial systems
         | engineering, and energy. There are ethical and moral issues in
         | the work that I've done/do as a contributor in each of those
         | domains, even though I'm not involved day-to-day in decision
         | making that feels particularly moral.
         | 
         | One of the base assumptions we probably need to make in our
         | work is that whatever we do will always be misused in the worst
         | possible way. If we explore that idea, it might give us some
         | sense for how to structure our output to curtail the worst of
         | the damages.
        
           | Mezzie wrote:
           | > The early days of search in Google or social in Facebook
           | probably didn't elicit the same kind of thought process as a
           | lethality engineering post.
           | 
           | It did for at least one person (me). I was 16 in 2004 with 11
           | years of dev experience, trying to decide whether to go out
           | to SV, go to college for CS, or do something else. I was from
           | the same city/community as Larry Page and in Zuck's age
           | group, so it wasn't an absurd consideration to try. Lots of
           | things went into my decision to do something non-CS related
           | for college, but morals were one of the reasons I didn't go
           | to SV (I objected to the professionalization of the web +
           | Zuck creeped me out + I didn't agree with cutting out
           | humans/curators from the search process like Google did).
           | 
           | It's just that until very recently, people either thought I
           | was lying OR that I was just batshit insane. Who is invited
           | to a gold rush and _doesn't go_?
           | 
           | I can't imagine I was the only one.
        
             | memling wrote:
             | > I can't imagine I was the only one.
             | 
             | I'm sure not, and hopefully the description I provided
             | isn't a blanket one. And, to be clear, I'm also not trying
             | to say that working for any of those organizations is _per
             | se_ unethical. I don't think that this is the case.
             | 
             | The point, rather, is that ethical and moral considerations
             | are actually much nearer to us than might appear at first
             | blush. Sometimes this happens by the mere nature of the
             | work (killing people more efficiently) and sometimes by
             | scale (now when we surface search results, we make direct
             | impacts on what people learn, where they shop, how they
             | receive advertisements, etc., _none of which was true in
             | 1999_ ). Navigating this isn't easy (indeed, you can make
             | an argument that there is a morally good outcome for
             | killing people more efficiently; I'm not saying it's
             | necessarily a good one, but that one can be made), but we
             | don't routinely equip people to think about it.
             | 
             | To make matters worse, our cultural assumptions shift over
             | time. The Google/Facebook difference is illustrative. Page
             | and Brin are a generation older than Zuckerberg, and their
             | assumptions about what it means to be moral are probably
             | not the same. These assumptions also change based on
             | circumstance--when we scale a business from a garage to a
             | billion dollars, it's hard to maintain the True North on
             | your moral compass (assuming such a thing exists).
             | 
             | Anyway, I think a deep skepticism about human nature and
             | the utility of technology is probably very useful in these
             | situations.
        
             | robocat wrote:
             | But is the world better off if moral people avoid immoral
             | jobs?
             | 
             | I believe the world shows there is a plentiful supply of
             | talented people who are willing to do immoral jobs. So
             | removing yourself from the pool of candidates makes little
             | difference.
             | 
             | Alternatively, one could work in an immoral job and make a
             | difference from the inside.
             | 
             | Why not do that? Perhaps to feel impotently virtuous, or
             | perhaps the work couldn't be stomached by the virtuous, or
             | perhaps the virtuous but weak are scared of losing their
             | virtuousness...
        
         | hef19898 wrote:
         | I can relate to that. In my career I have stepped out of and
         | into defense, and it never really bothered me that much, to be
         | honest. But then it was always things like fighter jets and
         | helicopters sold to NATO members, so I never had to rationalize
         | that we built the weapons carrier and not, e.g., the missiles
         | that actually cause harm.
         | 
         | I always drew the line at small arms, though. Way more people
         | die because of those, they end up in every conflict, and there
         | have been too many scandals of small arms manufacturers
         | circumventing export restrictions. Quite recently I added
         | supporting countries like Saudi Arabia and the UAE to that
         | list; even though the job would have been _really_ interesting,
         | providing highly sophisticated training services to the Saudis
         | is nothing I could do and still look at myself in the mirror.
         | And civil aerospace is fun as well.
        
           | starwind wrote:
           | I worked in defense too, might go back. When I get calls I'm
           | like "I don't do work for the Saudis or the DEA" and half
           | the time the recruiter is like "Uh, I said this job is for
           | Raytheon."
           | 
           | "Yeah, but who's their client?"
        
             | hef19898 wrote:
             | One way or the other, regardless of the company, probably
             | one if not both of those countries. Sure, those countries
             | are rich; I just hope that Ukraine showed us in the
             | democratic West that certain values, like human rights,
             | shouldn't be compromised on, as we all did in the last few
             | decades.
             | 
             | I do understand why those companies chase Saudi and UAE
             | contracts; that's where the money is. Maybe that changes if
             | NATO members increase defense spending, it would be a nice
             | side effect, wouldn't it?
        
               | starwind wrote:
               | Sadly, yeah, the big contractors work for anyone it's
               | legal to work for. I'll just make sure I don't end up on
               | a program working for scumbags. If I get canned because I
               | won't work for someone, I get canned and life goes on. My
               | security clearance is worth a whole lot to the right
               | person.
        
         | xwdv wrote:
         | I think you were hard on him. There should be no ethical qualms
         | when our weapons are used on enemies who seek to kill us or
         | attack our interests.
         | 
         | Also, if an ethical person doesn't take this job, someone far
         | more unethical probably will. And they will raise no objections
         | if they should ever be necessary. Kind of like how a lot of bad
         | people become police officers when no one good wants to do it.
        
       | xycombinator wrote:
       | Is anybody searching for compounds that reduce evil intent?
       | Something that would mellow people out without causing
       | hallucinations. A mass tranquilizer? Not effective against lone
       | operatives but able to be deployed against an invading army.
        
         | orangepurple wrote:
         | You have to reach deep into the internet to find the original
         | recording of "PENTAGON BRIEFING ON REMOVING THE GOD GENE"
         | 
         | The number of people who feel the need to "debunk" it makes it
         | all the more mysterious.
        
         | rossdavidh wrote:
         | I believe that's called a sedative.
         | 
         | Most armies aren't filled with people with evil intent; they're
         | filled with draftees who couldn't get out of it.
        
         | jerf wrote:
         | Of course they are. Among the "evil intent" they would reduce
         | is any desire to rebel against your government, so you bet all
         | big intelligence agencies are looking into it, for instance.
         | Science fiction wrote about this decades ago.
         | 
         | Fortunately, there's a lot of considerations involved in
         | deployment of anything. It's easier said than done to get
         | something of a medical nature into a population
         | surreptitiously, because it's hard to get a certain dose into
         | one person without someone else getting not enough and yet
         | someone else getting way too much. You'd have to come up with a
         | way of delivering a medical dose in a controlled fashion and
         | lie about it or something, you couldn't just sneak it into the
         | food/water reliably.
         | 
         | Further, just because someone can name the exact complicated
         | effect they'd like doesn't mean there's a drug that corresponds
         | to it. _Serenity_, already mentioned, is a bit of a silly
         | example in my opinion because such a large effect should have
         | been found during testing. But it does no good to pacify the
         | population such that they'd never dream of so much as
         | peacefully voting out the current leaders if the end result is
         | that nobody would ever dream of so much as having enough
         | ambition to show up to their jobs and you end up conquered by
         | the next country over without them even trying, simply because
         | they economically run circles around you. Or any number of
         | other possible second-order effects. In a nutshell, it's
         | dangerous to try to undercut evolution just to stay in power
         | unless everywhere else decides to do so equally, because you'll
         | be evolved right out along with the society you putatively rule.
         | Evolution is alive and well and anyone who thinks it's asleep
         | and they can screw around without consequences is liable to get
         | a lethal wakeup call.
        
         | roywiggins wrote:
         | It's been tried, sort of.
         | 
         | https://en.wikipedia.org/wiki/Moscow_hostage_crisis_chemical...
        
           | sva_ wrote:
           | The US also built bombs containing that agent, BZ, but
           | destroyed their stockpiles in 1989.
           | 
           | https://en.wikipedia.org/wiki/M44_generator_cluster
           | 
           | https://en.wikipedia.org/wiki/M43_BZ_cluster_bomb
           | 
           | > _The M44s relatively small production numbers were due,
           | like all U.S. BZ munitions, to a number of shortcomings. The
           | M44 dispensed its agent in a cloud of white, particulate
           | smoke.[3] This was especially problematic because the white
           | smoke was easily visible and BZ exposure was simple to
           | prevent; a few layers of cloth over the mouth and nose are
           | sufficient.[5] There were a number of other factors that made
           | BZ weapons unattractive to military planners.[5] BZ had a
           | delayed and variable rate-of-action, as well as a less than
           | ideal "envelope-of-action".[5] In addition, BZ casualties
           | exhibited bizarre behavior, 50 to 80 percent had to be
           | restrained to prevent self-injury during recovery.[5] Others
           | exhibited distinct symptoms of paranoia and mania.[5]_
        
         | ansible wrote:
         | Uh, I don't know about that...
         | 
         | https://en.wikipedia.org/wiki/Serenity_(2005_film)
         | 
         | On a more serious note, anything that's going to affect
         | behavior is going to have a dosage range. Too little absorbed,
         | and there won't be enough effect. Too much, and that will harm
         | / kill people in interesting ways.
         | 
         | With chemical weapons, you only worry about the agent
         | bioaccumulating enough to kill your enemies. An enemy receiving
         | more than a lethal dose isn't a problem.
        
         | okasaki wrote:
         | It's been suggested, e.g.:
         | https://www.vice.com/en/article/akzyeb/link-between-lithium-...
         | 
         | > The report states: "These findings, which are consistent with
         | the finding in clinical trials that lithium reduces suicide and
         | related behaviours in people with a mood disorder, suggest that
         | naturally occurring lithium in drinking water may have the
         | potential to reduce the risk of suicide and may possibly help
         | in mood stabilisation, particularly in populations with
         | relatively high suicide rates and geographical areas with a
         | greater range of lithium concentration in the drinking water."
        
       | rossdavidh wrote:
       | While it's worrying and worth thinking about, the track record of
       | using AI to generate pharmaceuticals to do good has been "mixed",
       | except really it's just been a bust. It may someday do great
       | things, but not much yet, and one silver lining is that AI-
       | generated toxins are unlikely to improve on the human-designed
       | ones, either.
       | 
       | "That is, I'm not sure that anyone needs to deploy a new compound
       | in order to wreak havoc - they can save themselves a lot of
       | trouble by just making Sarin or VX, God help us."
        
         | starwind wrote:
         | > the track record of using AI to generate pharmaceuticals to
         | do good has been "mixed", except really it's just been a bust.
         | 
         | Researchers have only been using AI for drug development for
         | like 6 years; I think it's way too early to call it a bust.
        
           | rossdavidh wrote:
           | I guess I should have said "...thus far".
        
       | ansible wrote:
       | The article assumes that fully developing a new chemical weapon
       | would require considerably more development effort, at least with
       | regard to military usage: storable at room temperature, relatively
       | easy to manufacture from commonly available precursor chemicals,
       | etc. [1]
       | 
       | How true is that? Are there components of this process that are
       | easier now, where I have chemical structure X and a system
       | generates the process steps and chemicals needed to produce X?
       | How much of this domain in chemistry / chemical engineering has
       | been automated these days? What are the future prospects for this?
       | 
       | [1] I _assume_ one of the design goals for a new chemical weapon
       | for military use is that it breaks down in the environment, but
       | not too quickly (like say in a week or a month). Though I suppose
       | if you want to just destroy civilization you would design for
       | longevity in the environment instead. And being able to seep
       | through many kinds of plastic if possible.
        
         | [deleted]
        
         | logifail wrote:
         | > Where I have chemical structure X, and a system generates the
         | process steps and chemicals needed to produce X.
         | 
         | Undergraduate chemistry students spend a fair amount of time
         | learning how to look at a novel structure X and, by
         | disconnecting it "backwards" into simpler components, deduce a
         | route by which it might be synthesised "forward" in the
         | laboratory from readily available starting materials.
         | 
         | There's an excellent book on this, "Organic Synthesis: The
         | Disconnection Approach", by Stuart Warren.
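         | 
         | (As a minimal, illustrative sketch of what computer-assisted
         | disconnection can look like, RDKit's BRICS decomposition breaks
         | a molecule at bonds corresponding to common synthetic
         | disconnections; RDKit and the aspirin example here are
         | assumptions for illustration, not something taken from the
         | book.)
         | 
         |     from rdkit import Chem
         |     from rdkit.Chem import BRICS
         | 
         |     # Aspirin (acetylsalicylic acid) as an arbitrary example
         |     mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
         | 
         |     # Break retrosynthetically sensible bonds and print the
         |     # fragments; dummy atoms mark where the cuts were made
         |     for frag in sorted(BRICS.BRICSDecompose(mol)):
         |         print(frag)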
        
           | dredmorbius wrote:
           | Interesting.
           | 
           | Were / are you a chem major?
           | 
           | Any other major topics or readings you could recommend for
           | someone wanting a general understanding of key concepts in
           | modern chemistry? I'd suppose generally: materials,
           | synthesis, o-chem, and chem-eng.
           | 
           | My own background: began a hard-science degree. One year
           | undergrad uni chem.
        
             | 323 wrote:
             | The field is called "process chemistry". A very big thing
             | in pharma:
             | 
             | > _Process chemists take compounds that were discovered by
             | research chemists and turn them into commercial products.
             | They "scale up" reactions by making larger and larger
             | quantities, first for testing, then for commercial
             | production. The goal of a process chemist is to develop
             | synthetic routes that are safe, cost-effective,
             | environmentally friendly, and efficient._
             | 
             | https://www.acs.org/content/acs/en/careers/chemical-
             | sciences...
        
         | [deleted]
        
       | h2odragon wrote:
       | > ricin is (fortunately) not all that easy to turn into a weapon,
       | and the people who try to do it are generally somewhat
       | disconnected from reality (and also from technical proficiency).
       | 
       | Of course that fact was no barrier to much hype about the
       | "dangers" it posed, either. I suspect the same now: that we have
       | more to fear from the fear-junkie propaganda than from the actual
       | facts.
        
         | once_inc wrote:
         | I personally fear the lone-wolf attack drastically dropping in
         | cost and effort. Where it would once have been cost-prohibitive
         | to design and manufacture your own nerve gas or lethal virus,
         | these days, with AI/ML and CRISPR-Cas and the like, it feels
         | like any intelligent, deranged person wanting to take as many
         | people to the grave with him has the tools to do just that.
        
           | bitexploder wrote:
           | I think this is inevitable and something we will grapple with
           | in coming decades. Especially around genetic engineering of
           | viruses.
        
           | WJW wrote:
           | Intelligent and deranged persons already have the tools to
           | cause way more casualties with way less effort using guns
           | and/or explosives. The "problem" for them is that people who
           | get sufficiently deranged to think that killing a few hundred
           | (or even thousand) people will meaningfully solve the problem
           | they are upset about will also be sufficiently deranged that
           | their ability to reason coherently will be drastically
           | reduced.
        
             | wpietri wrote:
             | Would they? I'm not seeing that as necessarily true. The
             | Unabomber seems like a good example:
             | https://en.wikipedia.org/wiki/Ted_Kaczynski
             | 
             | Or look at mass shooting incidents: https://en.wikipedia.or
             | g/wiki/Mass_shootings_in_the_United_S...
             | 
             | The Las Vegas shooting was rationally planned and carried
             | out. He managed to shoot nearly 500 people, killing 60.
             | 
             | They did happen to pick conventional weapons. But is that
             | because of rational choice, or just familiarity and
             | availability? Imagine somebody like Kaczynski, but instead
             | of being an award-winning young mathematician, he was an
             | award-winning industrial chemist or genomics student.
        
               | uxp100 wrote:
               | Kaczynski did not optimize for death; he was really following
               | the lead of the shockingly common political bombing
               | campaigns of the 70s. The Las Vegas shooter might be a
               | better example.
        
               | wpietri wrote:
               | Sure, but I don't think that was a necessary outcome.
               | Consider this quote: "I felt disgusted about what my
               | uncontrolled sexual cravings had almost led me to do. And
               | I felt humiliated, and I violently hated the
               | psychiatrist. Just then there came a major turning point
               | in my life. Like a Phoenix, I burst from the ashes of my
               | despair to a glorious new hope."
               | 
               | I agree he went with something common to the time. But I
               | don't think that was a necessary outcome. After all, his
               | approach didn't achieve his goals, so we can't say his
               | sort of terrorism is any more rational than aiming for
               | something bigger. Indeed, given the nominal goals he
               | ended up with, one could argue that mass-death terrorism
               | would have been more rational.
        
       | VLM wrote:
       | This will probably come in handy for industrial espionage type
       | tasks.
       | 
       | Let's say you had a nation-state enemy that eats a lot of some
       | ethnic ingredient. Come up with a cheap artificial flavor/color
       | or process that is optimized to give heavy consumers cancer in 30
       | years. Not in one year; that will show up in the approval
       | process. Then have an agent in the target country "discover"
       | through random chance this really excellent food dye or whatever.
       | 
       | Now, if you kill half the population with cancer, you're gonna
       | get nuked in response; even non-nuke countries will be pissed off
       | enough to get nukes just to nuke the perpetrator. But let's say
       | you make the victims fat and sick and have them die a little
       | younger, just enough to get a 1% hit on economic growth...
       | 
       | Some people would say this is how we ended up with trans-fats and
       | margarine and vegetable oils in general, or certain veg oils in
       | particular.
       | 
       | Certainly, corn syrup has caused more human and economic
       | devastation than fission, nerve agents, or most any WMD I can
       | think of...
        
         | tenebrisalietum wrote:
         | The problem with this is that this is basically genetic
         | engineering: you might successfully make a low-level economic
         | growth impact now, but future generations will be resistant to
         | the poisons as those weak against them die off. You are
         | securing your own demise long-term if you don't subject your
         | population to the same.
        
         | [deleted]
        
         | 0xdeadbeefbabe wrote:
         | This is far more fun than believing in an emerging accident.
         | You don't have to eat corn syrup, btw.
        
         | tgtweak wrote:
         | I guess if it's tasty then it's fair game.
        
       | toss1 wrote:
       | >>The thought had never previously struck us. We were vaguely
       | aware of security concerns around work with pathogens or toxic
       | chemicals, ...We have spent decades using computers and AI to
       | improve human health--not to degrade it. We were naive in
       | thinking about the potential misuse of our trade...
       | 
       | Of course now, the next step is to use the technology to
       | preemptively search for and develop antidotes to the new
       | potential weapons their tool has discovered.
        
       | xkcd-sucks wrote:
       | The "dual use" paper this is commenting on is the clickbait
       | equivalent of "encryption is for pedos", and maybe Derek's "not
       | too surprising" is code for "Science editors are not discerning
       | enough".
       | 
       | Like, this is the whole point of pharmacology: Predicting the
       | biological interactions of chemicals (what they do to biological
       | targets, how potent), and their ancillary physical properties
       | (solubility, volatility, stability, etc.). For example,
       | 
       | Optimizing for mu-opioid agonist activity gives you super potent
       | painkillers, drugs of abuse, and that stuff Russia gassed a
       | theater with to knock out / kill hostages and kidnappers (i.e.
       | fentanyl analogues)
       | 
       | Optimizing for inhibition of various proteases might give you
       | chemotherapy drugs with nasty side effects, or stuff with nasty
       | side effects and no known therapeutic use (i.e. ricin)
       | 
       | Optimizing for acetylcholinesterase inhibitor activity will turn
       | up nasty poisons which could be purposed as "nerve agents" or
       | "pesticides"
       | 
       | Optimizing for 5HT2a activity will give compounds that are great
       | for mapping receptor locations in brains, which are also drugs of
       | abuse, and which are also lethal to humans in small doses.
       | 
       | And the "predicted compounds not included in the training set"
       | thing is just table stakes for any predictive model!
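       | 
       | (A back-of-the-envelope sketch of why that is table stakes: any
       | property predictor wrapped in a search loop generates whatever
       | the score rewards, and inverting a single term in the objective
       | flips "avoid toxicity" into "seek toxicity". The function names
       | and the random stand-in models below are hypothetical, purely for
       | illustration.)
       | 
       |     import random
       | 
       |     def predict_activity(smiles: str) -> float:
       |         # stand-in for a trained target-activity model
       |         return random.random()
       | 
       |     def predict_toxicity(smiles: str) -> float:
       |         # stand-in for a trained toxicity model
       |         return random.random()
       | 
       |     def score(smiles: str, want_toxic: bool) -> float:
       |         # a discovery run penalizes predicted toxicity; flipping
       |         # the sign of that single term rewards it instead
       |         tox = predict_toxicity(smiles)
       |         return predict_activity(smiles) + (tox if want_toxic else -tox)
       | 
       |     candidates = ["CCO", "CC(=O)O", "c1ccccc1"]  # toy SMILES strings
       |     print(max(candidates, key=lambda s: score(s, want_toxic=True)))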
        
         | Scoundreller wrote:
         | > 5HT2a
         | 
         | You sure you don't mean 5HT2b?
         | 
         | I mean, anything can be toxic with enough dose, but the
         | b-subtype agonists seem a lot more toxic than the a-subtype
         | agonists.
         | 
         | (Fun fact: 6-APB, a "research chemical" recreational substance
         | became an actual research chemical because it had better
         | 5HT2b selectivity than what was previously used in lab
         | research)
        
           | xkcd-sucks wrote:
           | Was thinking of halogenated NBOMe series - observed in humans
           | to have a pretty narrow therapeutic index re: death, cheapish
           | synth, can be vaporized
           | 
           | But yeah 2b could be worse. Or many other targets as well
           | 
           | Funny thing, optimizing "research chemicals" for (1)
           | uncontrolled synthetic pathway and (2) potency is common to
           | Institutional, Druggie, and Terrorist researchers. None of
           | them want to go through the bureaucracy for controlled
           | substances and potency is good for [better controlled
           | experiments / smaller quantities to transport / more killing
           | power]
        
             | Scoundreller wrote:
             | researchers will really care about selectivity, but potency
             | can help with the amount of paperwork for sure (and cut
             | synth costs!)
        
       | openasocket wrote:
       | Fortunately we don't see any real work in the chemical and
       | biological weapons space anymore. While it would still be pretty
       | handy for terrorist groups, in actual warfare chemical weapons
       | aren't super useful. See
       | https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch... .
        
         | VLM wrote:
         | > Fortunately we don't see any real work
         | 
         | Not really; having all new developments classified is not
         | helpful to anyone.
        
         | R0b0t1 wrote:
         | They are amazingly useful in real warfare. Drop nerve gas on a
         | city, walk in a couple days later. WMDs are the only way to
         | really take a country by force, and of all of them, chemical
         | weapons are the most palatable and also the easiest to produce.
         | 
         | Considering this, defense against them is at least mildly
         | important. A proper defense only exists by considering offense,
         | so they're still developing chemical weapons somewhere. The
         | modern hot topic is viruses and other pathogens.
        
           | someotherperson wrote:
           | That's not how it works, and they're not very useful at all.
           | The amount of actual product you need is non-trivial and at
           | that point you might as well just use modern conventional
           | munitions.
           | 
           | The reason it fell out of favour isn't that it's dangerous;
           | it's that it was ineffective outside of TV and film.
        
             | R0b0t1 wrote:
             | That's not how it works? That's all I get? I'd refer you to
             | the site guidelines; barging into a thread and going "NO U"
             | is not a real conversation.
             | 
             | A siege of a city is more impractical than it ever has
             | been. In ancient times a siege was conducted out of
             | necessity; it was the only way to kill everyone inside if a
             | population did not desire subjugation. Complete death of
             | those resisting you was typically the goal, with the slow
             | communication of antiquity leaving any resistance might
             | mean coming back to an army the next time you visit. It was
             | easier to depopulate the region and move your descendants
             | in.
             | 
             | We see echoes of this in modern times. We "took" Kabul at
             | extreme expense, but did not really "take" it as asymmetric
             | enemy forces continued to operate throughout the entire
             | country while the US occupied Afghanistan. Taking many
             | cities across a nation with advanced embedded weaponry is
             | going to be impossible. If it came down to it, such a
             | country would resort to area denial, like Russia did in
             | Chechnya and Syria, leveling the cities instead of sweeping
             | them.
             | 
             | We don't see people deploying chemical WMDs, not because
             | they are too expensive, but for political reasons, and
             | after that, because they don't have them due to
             | disarmament treaties. All it takes is someone deciding they
             | really want to win for all of it to change. You can deny a
             | huge area for weeks with a few chemical warheads. You can
             | make a city inhospitable using less materiel than it'd take
             | to flatten it.
        
               | openasocket wrote:
               | I'd invite you to read the article I linked:
               | https://acoup.blog/2020/03/20/collections-why-dont-we-
               | use-ch... . Generally speaking, if you need to take a
               | city you're better off using high explosives than
               | chemical weapons. It's well researched and cites sources.
        
               | R0b0t1 wrote:
               | And I'd invite you to re-read my comment. He agrees with
               | my main point:
               | 
               | > In static-system vs. static-system warfare. Thus, in
               | Syria - where the Syrian Civil War has been waged as a
               | series of starve-or-surrender urban sieges, a hallmark of
               | static vs. static fighting - you see significant use of
               | chemical weapons, especially as a terror tactic against
               | besieged civilians.
               | 
               | The Russians are in a similar situation because they
               | do not have equipment suitable for a highly mobile
               | army (I don't quite expect them to use chemical
               | weapons, for reasons below, but it's worth pointing
               | out).
               | 
               | There's a lot wrong with his take. A lot of what he
               | is writing is unsourced conjecture. It's like saying
               | man-portable missiles are irrelevant when you can
               | have the CIA topple their government and remove their
               | will to fight.
               | 
               | For one, conventional arms are horribly inefficient
               | at killing in the first place! It's thousands of
               | rounds fired for a confirmed kill, and the stat is
               | just as bad for artillery. Any marginal improvement
               | is a big deal.
               | 
               | He does not convincingly separate their lack of
               | legitimate use from moral concerns. Developed nuclear
               | states don't use them for a lot of reasons, but a huge
               | issue is that chemical weapons are on the escalation
               | ladder. In the US's case it's also that we don't want to
               | kill indiscriminately. He as much as states this at
               | one point:
               | 
               | > In essence, the two big powers of the Cold War (and, as
               | a side note, also the lesser components of the Warsaw
               | Pact and NATO) spent the whole Cold War looking for an
               | effective way to use chemical weapons against each other,
               | and seem to have - by the end - concluded on the balance
               | that there wasn't one. Either conventional weapons get
               | the job done, or you escalate to nuclear systems.
               | 
               | > But if chemical weapons can still be effective against
               | static system armies, why don't modern system armies
               | (generally) use chemical weapons against them? Because
               | they don't need to. Experience has tended to show that
               | static system armies are already so vulnerable to the
               | conventional capability of top-flight modern system
               | armies that chemical munitions offer no benefits beyond
               | what precision-guided munitions (PGMs), rapid maneuver
               | (something the Iraqi army showed a profound inability to
               | cope with in both 1991 and 2003), and the tactics (down
               | to the small unit) of the modern system do.
               | 
               | I take no exception to this, but basically no large
               | army has encountered a case where it needs quickly
               | deployed area denial beyond what landmines provide. A
               | massive retreat into the interior of a country may be
               | such a case, but you run into the issue that a
               | decapitation strike against that state is probably
               | going to be more effective.
               | 
               | For what it's worth, this is why Russia's concern
               | about NATO countries marching into it is nonsensical.
               | It's just, perhaps, that they never realized how
               | nonsensical it was, as their defense planners do not
               | have experience with a highly dynamic army. (But
               | oddly they seem to have _some_ idea of what might
               | happen, as this is what likely led to their
               | development of nuclear/neutron mortars and artillery.
               | But any situation where those would come out is going
               | to be ICBM time anyway.)
        
           | amelius wrote:
           | They go against the Geneva Protocol, and it's not even
           | allowed to stockpile them, so they're not even useful if
           | you are a terrorist with a death wish, because then there
           | are simpler ways to end your problems.
        
             | csee wrote:
             | Russia used them extensively in Syria quite recently, so
             | the concerns are valid.
        
               | mardifoufs wrote:
               | What?! This is not true! I've never heard anyone
               | claiming Russia did it. The mainstream consensus is
               | that the Syrian government did it, while a minority
               | thinks it was either old stock getting released by
               | accident or the rebels doing it.
               | 
               | Do you have a source? Because even with all the
               | controversy surrounding international investigation and
               | the theories that have spawned around that, Russia
               | wasn't even a possible suspect.
        
               | someotherperson wrote:
               | The allegations by OPCW are politicised[0] and based
               | on theoretical chemistry, i.e. hexamine as an acid
               | scavenger.
               | 
               | That is to say: neither the Syrian government nor Russia
               | have used chemical weapons in Syria. They haven't used
               | them because they are -- for all intents and purposes --
               | useless. If you want to take down a group of people in
               | flip flops and have access to a thermobaric[1] MLRS[2]
               | you're not going to break international law so you can
               | give one or two of them a scratchy throat with chlorine
               | payloads (if you're lucky).
               | 
               | [0] https://wikileaks.org/opcw-douma/
               | 
               | [1] https://en.wikipedia.org/wiki/Thermobaric_weapon
               | 
               | [2] https://en.wikipedia.org/wiki/TOS-1
        
               | ceejayoz wrote:
               | The assertions in the Wikileaks docs are contested, and
               | focus on a single incident when the war has had multiple.
               | 
               | https://www.bellingcat.com/news/mena/2020/01/15/the-opcw-
               | dou...
               | 
               | For example, that
               | https://en.wikipedia.org/wiki/Ghouta_chemical_attack
               | happened is not disputed by Russia; they dispute _who_
               | did it.
        
               | mardifoufs wrote:
               | Bellingcat has been pretty reliable but their
               | investigations around the chemical attacks were very...
               | flawed. It happened for sure, but their analysis of how
               | the events unfolded on the ground was so lacking (not
               | their fault, OSINT can only get you so far in a chemical
               | attack) that imo they probably should've just not
               | published their initial articles. Doesn't mean they can't
               | be right on the OPCW controversy, but it's still
               | something to keep in mind
               | 
               | But in any case, while yes there is a dispute around
               | who did it... Russia was never claimed to be the
               | responsible party by anyone. The two options are
               | either the Syrian government or the rebels, and
               | that's true for all the chemical attacks.
               | 
               | So the GP was completely wrong, Russia did not use
               | chemical warfare in Syria!
        
               | ceejayoz wrote:
               | Given that the Syrian war is still ongoing, that seems to
               | debunk the idea that it's as easy as "Drop nerve gas on a
               | city, walk in a couple days later."
        
         | WinterMount223 wrote:
         | I guess developments are not published in Nature.
        
         | cogman10 wrote:
         | Just doesn't seem like it's worth it even for terrorists.
         | 
         | Why invest a bunch of time and effort making more and more
         | deadly poisons when we've already got a wide variety of them
         | that are cheap to manufacture, well known in how they work, and
         | don't cost a bunch of research money to uncover?
        
           | upsidesinclude wrote:
           | Hmmm, and then you have to ask, for whom is it worth it?
           | 
           | Who might _invest_ a bunch of time and effort in those areas?
        
           | chasd00 wrote:
           | From what I understand, the delivery of a chemical or
           | biological weapon is the hard part. For most things, you
           | can't just pour it out on the ground to have a huge
           | effect. Some things you certainly can; weapons-grade
           | anthrax probably just needs a light breeze to devastate a
           | city, but something like that is beyond the reach of your
           | average terrorist group.
        
           | openasocket wrote:
           | True, though you have to remember that threat is a social
           | construct and isn't necessarily a rational measure. The 2001
           | anthrax attacks killed 5 people, injuring 17, and shocked the
           | nation. As a direct result Congress put billions into funding
           | for new vaccines and drugs and bio-terrorism preparedness. If
           | 5 people were killed and 17 wounded in a mass shooting by a
           | terrorist, would we really have reacted as strongly?
           | 
           | If you wanted to instill fear in a country, I think being
           | attacked by some custom, previously unknown chemical
           | weapon would be scarier than sarin.
        
           | at_a_remove wrote:
           | At the risk of putting myself on a watchlist, They already
           | know that, so They have an eye on certain kinds of labware,
           | different precursors, and such. And They already have
           | antidotes to some of these poisons.
           | 
           | One could optimize for compounds with hard-to-monitor
           | precursors. Compounds that can be transported with low vapor
           | pressure and volatility, so they cannot be easily sniffed
           | out.
           | 
           | Or imagine a lethal compound with a high delay factor. Or
           | something with specifically panic-inducing effects, perhaps
           | hemorrhagic with a side effect of your skin sliding off in
           | great slick sheets. Another interesting high delay factor
           | compound might induce psychosis: have fun tracking where
           | these gibbering maniacs were a week ago.
           | 
           | With a sufficiently dark imagination, "needs" could be
           | identified for all sorts of compounds.
           | 
           | Remember, the goal is to throw a monkey wrench into the
           | gearwork of an opposing civilization, not necessarily to
           | _kill_. Fear of the unknown is very effective for this.
        
             | Enginerrrd wrote:
             | >Another interesting high delay factor compound might
             | induce psychosis
             | 
             | This kind of exists already. BZ gas is the well-known
             | delirium-inducing compound with a delay of several hours: (
             | https://en.wikipedia.org/wiki/3-Quinuclidinyl_benzilate#Eff
             | e...)
             | 
             | The effects are probably mostly temporary though.
        
             | VLM wrote:
             | > Or imagine a lethal compound
             | 
             | You'll get nuked (or similar WMD) for that.
             | 
             | Imagine a somewhat more realistic set of applications for
             | hot new research chemicals.
             | 
             | How about aircraft or shells or covert actors spray some
             | "thing" that shorts out electrical insulators 1000x more
             | often than normal. Or makes the vegetation underneath power
             | lines 1000x more flammable than normal vegetation. Our
             | power is unreliable, causing a major economic hit both
             | directly and via higher electrical bills. If "they" want
             | to invade, now the civilians won't have power and will
             | be more likely to get out of the way long before the
             | front line
             | troops arrive. I mean you could probably put nano-particles
             | of graphite in a spray can right now, then stand upwind of
             | a power station or substation, but I bet extensive research
             | would do better. A lot of high power electrical "Stuff"
             | relies on plain old varnish being inert for a long time ...
             | what happens if it wasn't? Again, if you shut down a
             | country they're gonna nuke you, but what if electrical
             | power transformers and switching power supplies only
             | last one year on average instead of ten? That's a huge
             | economic and maybe military strategic advantage, but
             | would you get nuked back because some nation's TVs burn
             | out in one year instead of the carefully value-engineered
             | ten years?
             | 
             | How about a spray or microbe or whatever that screws up air
             | filters. Who cares, right? Well most troops (and cops) in
             | most countries have gas masks. Zap their masks via whatever
             | new magical method, then drop simple plain old tear gas the
             | next day or until logistics catches up, which will take
             | awhile assuming they even know they're damaged. Normally
             | when hit with CS, they'd mask up and the CS would have no
             | effect on mask wearers other than reduced vision, but now
             | the side that didn't get their masks ruined has a HUGE
             | tactical advantage.
             | 
             | If you make a bioweapon and kill half the population,
             | they're gonna be PISSED and you're gonna get nuked. So try
             | something a little more chill. If your vitamin A reserves
             | are gone, your night vision is temporarily essentially
             | gone. Yeah for long term vit A deficiency you'll get long
             | term skin, general growth, and infection risk problems, but
             | if someone sprayed you with some weird compound that made
             | you pee out all your body's stores of vit A before tomorrow
             | morning, the only real short term effect would be night
             | blindness, and that would go away in a couple days with a
             | normal-ish diet or by taking a multivitamin for a few
             | days or a couple of vit A pills. So spray the enemy
             | (and/or the civilians) and they can't see in the dark so
             | magical automatic curfew for the civvies and attack the
             | night blind military and absolutely pound them because
             | they're night blind and can't see your guys. If they have
             | NVGs then hit them at dawn/dusk when the NVGs won't work
             | completely correctly but they can't see without them
             | because of night blindness. It's temporary and never hurt
             | anyone other than the opfor "owning the night" until the
             | victims figure it out or naturally recover, so at a
             | strategic / diplomatic level would a country nuke another
             | country because they couldn't see at night for a couple
             | days? Naw probably not. And you can imagine the terror
             | attack / psych warfare potential of leaflets explaining,
             | "we turned off your night vision for a couple days, now
             | obey or we shut off cardiac function next time" Either for
             | the government to use against civilians (think Canada vs
             | truckers) or governments to use against each other (China
             | vs Taiwan invasion or similar). Or give them temporary
             | weird fever sweats or turn their pee robin's-egg blue or all
             | kinds of fun.
             | 
             | Now the above is all sci fi stuff I made up and AFAIK I'm
             | not violating any secrets act, unless this post magically
             | disappears in a couple hours LOL.
             | 
             | Think of the new non-lethal battlespace like computer virus
             | attacks. Yeah, we could "EMP" Russia to shut off most of
             | their computers and they'd be really pissed off and nuke us
             | right back, so that's a non-starter. But release "windoze
             | annoyance virus number 31597902733" and that could have
             | real world effects. Especially if you release 20,30,4000
             | new zero-days on the same day.
        
               | whatshisface wrote:
               | > _Especially if you release 20,30,4000 new zero-days on
               | the same day._
               | 
               | Interesting example of how cyber attacks could blow back.
               | Anything you put in a virus can be taken out and used
               | against you.
        
               | KineticLensman wrote:
               | > How about aircraft or shells or covert actors spray
               | some "thing" that shorts out electrical insulators 1000x
               | more often than normal.
               | 
               | Dropping anti-radar chaff strips is a very good lo-tech
               | way of shorting transformers and power lines. I can't
               | find a link, but IIRC the USAF discovered this
               | accidentally when training missions led to power outages
               | in nearby towns.
        
             | logifail wrote:
             | > At the risk of putting myself on a watchlist, They
             | already know that, so They have an eye on certain kinds of
             | labware, different precursors, and such. And They already
             | have antidotes to some of these poisons.
             | 
             | Errm, maybe They Do.
             | 
             | On the other hand, I used to work in an organic chemistry
             | research lab, and at least within my ex university's
             | context, we could basically order anything we wanted from
             | the standard chemical suppliers without anyone batting an
             | eyelid. Pre-signed but otherwise blank order forms were
             | freely handed out, you just filled in what compounds you
             | wanted and handed it over, two days later it arrived and
             | you collected it from Stores.
             | 
             | I personally ordered a compound for a reaction I was
             | planning and it was only after it arrived - when I read the
             | safety data sheet - that I realised just quite how toxic it
             | was.
             | 
             | I backed carefully away from that particular bottle, and
             | left it in the fridge, still sealed. Then found another -
             | safer - way to do the reaction instead...
        
               | david422 wrote:
               | > I backed carefully away from that particular bottle,
               | and left it in the fridge, still sealed. Then found
               | another - safer - way to do the reaction instead...
               | 
               | I've wondered how manufacturing plants handle this. You
               | back away because you're afraid of touching the stuff -
               | how does a giant factory that produces and ships the
               | stuff handle it?
        
               | detaro wrote:
               | It clearly can be handled; the question is what's the
               | procedure to handle it correctly, and do you trust
               | _your_ procedure? A manufacturer, or someone regularly
               | working with this kind of thing, knows and trusts
               | theirs. If you suddenly realize it wasn't quite what
               | you signed up for, backing off is clearly the better
               | choice than trusting your guess at a procedure. But
               | risk can be managed a lot.
               | 
               | Although certainly over-confidence can also happen on the
               | other end, e.g. if something that's quite similar to
               | other dangerous things you work with suddenly has an
               | additional trap. And Safety Datasheets are notorious for
               | not necessarily representing actual in-use risks well.
        
               | 323 wrote:
               | The same way other dangerous stuff is made?
               | 
               | There are plenty of dangerous chemicals made on a huge
               | scale - sulfuric acid, cyanide, explosives, ...
        
               | emaginniss wrote:
               | Right, and if you had ordered 3 barrels of the stuff,
               | you'd get a visit from the feds.
        
               | logifail wrote:
               | > 3 barrels of the stuff
               | 
               | Barrels? Based on what the LD50 was / what the data sheet
               | said, the bottle I briefly had in my hands - and yes,
               | they did start shaking - would have done for a good
               | proportion of the residents of a small city had it
               | managed to be spread around in a form that would have
               | been ingested.
               | 
               | Chemistry labs are typically well-stocked with quite a
               | lot of fairly unpleasant things. They're also the places
               | where a lot of genuinely amazing and potentially life-
               | saving work gets done!
        
               | tetsusaiga wrote:
               | You can't just tease us like this! If not the actual
               | chemical... maybe an analogue or something? Chemistry is
               | one of my great fascinations lol.
        
               | dekhn wrote:
               | Is that toxic bottle still sealed, in the fridge, after
               | you've left the institution? I've had to deal with a few
               | EHS situations like that.
        
               | danuker wrote:
               | How did you deal with it? I would expect they don't take
               | returns.
        
               | dekhn wrote:
               | You call your university's EHS department and tell them
               | as much about what you know about the contents of the
               | bottle (which may not be what is on the label). They seal
               | off the lab, remove it, and using what they can determine
               | about the contents, destroy it safely.
        
               | throwaway0a5e wrote:
               | The same thing that happens when someone quits.
               | 
               | The people finding these things have the same skill sets
               | and access to the same handling/disposal facilities as
               | the people leaving these things, so it's very much an "oh
               | my former coworker forgot to/didn't have an opportunity
               | to dispose of X before departing, I'll just do it myself
               | in the same manner he would have". Furthermore, these
               | people have lives, they go on vacation and cover each
               | other. The institutional knowledge of how to handle
               | dangerous organic things necessarily exists in the
               | institutions that do so.
        
               | dekhn wrote:
               | No bench chemist should attempt to clean up this stuff.
               | Go to your university or company EHS, and if you don't have
               | that, your city does. The history of chemistry is filled
               | with responsible and intelligent organic chemists who
               | nonetheless died terrible deaths. EHS has strategies to
               | avoid this.
        
               | throwaway0a5e wrote:
               | Humor us all and think another few steps ahead. And
               | what's EHS gonna do?
               | 
               | They're gonna CC the guy whose office is right beside
               | yours, because (surprise surprise) the departments and
               | teams whose work results in them having weird nasty
               | stuff buried in the back of the walk-in fridge are the
               | same people who know how to handle it.
               | 
               | EHS is just a coordinator. They don't have subject
               | matter expertise in everything. So they contact the
               | experts. If your biology department has a fridge with
               | Space AIDS(TM) in it, it's because your department is
               | the experts, so you'll be getting the call.
        
               | dekhn wrote:
               | Yes, I know how these things work, as my coworkers were
               | those EHS people. The point is that they had training,
               | and they are working within an official university
               | context (laws, etc).
        
               | throwaway0a5e wrote:
               | So why not save everyone the week of back-and-forth
               | emails while nothing gets done and ask them directly how
               | they want to deal with it rather than putting tons of
               | people on blast and substantially constraining their
               | options by bringing intra-organization politics into the
               | mix?
        
               | QuercusMax wrote:
               | Sounds like a great way to get Normalization of
               | Deviance[1]. One senior person says "I know how to
               | dispose of this, so it's OK if I don't go through proper
               | channels." Then the next person, following their lead
               | without understanding the implications, says "Joe Senior
               | over there disposed of something scary they found without
               | wasting time going through EHS, so I'll do the same."
               | Maybe it goes fine for a while, but eventually you'll end
               | up with a situation where you've poisoned the groundwater
               | or released dangerous chemicals into the air, because
               | nobody is following the proper channels any more.
               | 
               | 1.
               | https://en.wikipedia.org/wiki/Normalization_of_deviance
        
               | throwaway0a5e wrote:
               | Telling your boss or relevant colleague instead of going
               | over everyone's heads from the get go isn't normalization
               | of deviance and we both know it.
               | 
               | I really dislike these sorts of "name drop" comments.
               | They're just the equivalent of "F" or "the front fell
               | off" with a high-enough-brow-for-HN veneer on top.
        
               | QuercusMax wrote:
               | You're suggesting that people bypass official procedures
               | and/or laws in order to save time. This is a bad path to
               | start down. The fact that you're posting this as a
               | throwaway indicates that you don't want your HN account
               | associated with these proposals.
               | 
               | Here's a relevant software-related analogy:
               | 
               | I work in a situation where if we receive certain types
               | of data, we have to go through proper procedures
               | (including an official incident response team). It would
               | be very easy for me to say "I've verified that nobody
               | accessed this data, and we can just delete it," instead
               | of going through the proper channels, which are VERY
               | annoying and require a bunch of paperwork, possibly
               | meetings, etc.
               | 
               | Maybe nothing bad happens. But next time this happens,
               | one of my junior colleagues remembers that the 'correct'
               | thing to do was what I did (clean it up myself after
               | verifying nobody accessed the data). Except they screwed
               | up and didn't verify that nobody had accessed the data in
               | question - and now we are in legal hot water over a data
               | privacy breach.
               | 
               | And then people go back through the records, and both the
               | junior engineer and I get fired for bypassing the
               | procedures which we've been trained on, all because I
               | wanted to save some time.
        
               | throwaway0a5e wrote:
               | >You're suggesting that people bypass official procedures
               | and/or laws in order to save time. This is a bad path to
               | start down.
               | 
               | You are assuming rules say what they mean and mean what
               | they say (and are even written where you're looking, and
               | if they are that they're up to date). If it's your first
               | week on the job, by all means, do the most literal and
               | conservative thing. If it's not, well, you should know
               | what your organization actually expects of you, what is
               | expected to be reported and what isn't.
               | 
               | There's a fine line to walk between notifying other
               | departments when they need to be notified and wasting
               | their time with spurious reports.
               | 
               | When maintenance discovers their used oil tank is a
               | hair away from being a big leaking problem, they just
               | fix it, because they are the guys responsible for the
               | used oil and keeping it contained is part of their job.
               | 
               | Your bio lab or explosives closet isn't special. If the
               | material is within your department's purview then that's
               | the end of it.
               | 
               | Not every bug in production needs to be declared an
               | incident.
               | 
               | >Maybe nothing bad happens. But next time this happens,
               | one of my junior colleagues remembers that the 'correct'
               | thing to do was what I did (clean it up myself after
               | verifying nobody accessed the data). Except they screwed
               | up and didn't verify that nobody had accessed the data in
               | question - and now we are in legal hot water over a data
               | privacy breach.
               | 
               | You can sling hypotheticals around all you want, but for
               | every dumb anecdote about informal process breaking down
               | and causing stuff to blow up I can come up with another
               | about formal process leaving gaps and things blowing up
               | because everyone thought they had done their bit. It's
               | ultimately going to come down to formal codified process
               | vs informal process. Both work, both don't. At the end of
               | the day you get out what you put in.
               | 
               | >The fact that you're posting this as a throwaway
               | indicates that you don't want your HN account associated
               | with these proposals.
               | 
               | This account is how old? Maybe I just use throwaways
               | because I like it.
        
               | dekhn wrote:
               | It sounds like you may have had a bad time with EHS in
               | the past. I found that by making friends with everybody
               | involved ahead of time, I suddenly had excellent service.
               | 
               | sadly, after 30 years of training to be a superhacker on
               | ML, my greatest value is actually in dealing with intra-
               | organizational politics.
        
               | QuercusMax wrote:
               | I work in a regulated software space, and my experience
               | is that treating quality and regulatory folks as
               | adversaries is a great way to have your projects take way
               | longer than they should and cause immense frustration.
               | Understanding the hows and whys of the way things work
               | makes life easier for everyone. I haven't worked with EHS
               | in the past, but I imagine it's much the same - if you're
               | seen as somebody who's trying to cut corners and take
               | shortcuts, yeah, you'll probably have a bad time.
        
               | mcguire wrote:
               | This is how you wind up spending many, many $ remediating
               | a building. And getting those weird questions like,
               | "Inventory says we have 500ml of X, anyone know where it
               | is?"
        
               | logifail wrote:
               | > This is how you wind up spending many, many $
               | remediating a building
               | 
               | Oh yes, and this isn't a new phenomenon, for instance:
               | 
               | "When Cambridge's physicists moved out of the famous
               | Cavendish laboratories in the mid-1970s, they
               | unintentionally left behind a dangerous legacy: a
               | building thoroughly contaminated with mercury. Concern
               | about rising levels of mercury vapour in the air in
               | recent months led university officials to take urine
               | samples from 43 of the social scientists who now have
               | offices in the old Cavendish. The results, announced last
               | week, show that some people have exposure levels
               | comparable to people who work with mercury in
               | industry."[0]
               | 
               | [0] _The mercury the physicists left behind_
               | https://www.newscientist.com/article/mg12817450-800-the-
               | merc...
        
               | logifail wrote:
               | > Is that toxic bottle still sealed, in the fridge, after
               | you've left the institution?
               | 
               | Quite possibly!
               | 
               | The previous occupant of my bench area (and hence
               | adjoining fridge space) left some barely-labeled custom
               | radioactive compounds(!!) in the fridge for me to find
               | shortly after I took over that space, so I know how that
               | feels.
               | 
               | After consulting suitably-trained personnel, the contents
               | of the vials were then disposed of ... by pouring down a
               | standard sink, with lots of running water.
               | 
               | Those were the days :eek:
        
               | marcosdumay wrote:
               | "They" are not proactive, because they know people hiding
               | bad things need time and coordination. So, only taking
               | notice (and notes) and investigating strange patterns is
               | enough.
               | 
               | But also a lot of what the GP says doesn't apply,
               | because in the case of terrorism, "They" is either the
               | police or random people, so "They" definitely do not
               | have antidotes or training on how to handle known
               | poisons.
        
               | Enginerrrd wrote:
               | Fluorine compound? Organic heavy metal? I'm curious.
        
         | mcguire wrote:
         | The real fun starts when somebody starts using techniques like
         | this that overcome the weaknesses of known chemical weapons and
         | provide specific advantages. It's also kind of hard to monitor
         | computational chemical research.
         | 
         | It's my understanding that the Soviet army doctrine in the '70s
         | and '80s included the use of chemical weapons. That
         | hypothetical threat put a hell of a lot of friction on NATO in
         | terms of training, supplies, and preparedness.
        
         | derefr wrote:
         | On a tangent: it occurred to me recently that we also don't see
         | much use of ICBMs with non-nuclear payloads, despite these
         | being a fairly obvious "dominant strategy" for warfare -- and
         | one that _isn't_ banned by any global treaties.
         | 
         | I'm guessing the problem with these is that, in practice, a
         | country can't use any weapons system that could _potentially_
         | be used to "safely" deliver a nuclear payload (i.e. to deliver
         | one far enough away that the attacking country would not,
         | itself, be affected by the fallout) without other countries'
         | anti-nuke defenses activating. After all, you could always
         | _say_ you're shooting ICBMs full of regular explosive
         | payloads, but then slip a nuke in. There is no honor in
         | realpolitik.
         | 
         | So, because of this game-theoretic equilibrium, any use of the
         | stratosphere for ballistic weapons delivery is _effectively_
         | forbidden -- even though nobody's explicitly _asking_ for it
         | to be.
         | 
         | It's interesting to consider how much scarier war could be
         | right now, if we _hadn't_ invented nuclear weapons... random
         | missiles just dropping down from the sky for precision strikes,
         | in countries whose borders have never even been penetrated.
        
           | nuclearnice1 wrote:
           | Conventional Prompt Global Strike is intended to provide the
           | ability to deliver a conventional kinetic attack anywhere in
           | the world within an hour. It has been an active area of
           | weapons research for the US for 20 years. As you speculate,
           | misinterpretation of the launch is a concern. [1]
           | 
           | As openasocket points out, there are many shorter-range
           | conventional weapons used across borders. The cruise
           | missiles of Gulf War 1 or the drones of the post-September
           | 11 world.
           | 
           | [1] https://sgp.fas.org/crs/nuke/R41464.pdf
        
           | the_af wrote:
           | > _It's interesting to consider how much scarier war could
           | be right now, if we hadn't invented nuclear weapons... random
           | missiles just dropping down from the sky for precision
           | strikes, in countries whose borders have never even been
           | penetrated._
           | 
           | Why are cruise missiles any less scary? They are indeed used
           | in precision strikes across country borders, and can kill you
           | just the same. The existence of nuclear weapons still allows
           | some countries to use cruise missiles, as we see happen
           | almost every year.
        
           | dwighteb wrote:
           | Interesting tangent I hadn't considered before. However,
           | China is testing some ballistic anti-ship missiles.
           | https://www.navalnews.com/naval-news/2021/11/aircraft-
           | carrie...
           | 
           | To be fair, if these become a reality, they would likely
           | strike targets in the Pacific Ocean and South China Sea, far
           | away from the US, but the potential to spook nuclear nations
           | is still there.
        
           | openasocket wrote:
           | There are quite a lot of shorter-range conventional systems.
           | In practice you don't need that inter-continental range for
           | most purposes. For some modern examples you have the Chinese
           | DF-21 and the Russian Iskander system. And a lot of those
           | systems are dual-use: capable of delivering both nuclear and
           | conventional payloads. It's not totally clear what that will
           | mean in a conflict between two nuclear powers. What do you do
           | when early warning radar picks up a ballistic missile coming
           | in when you can't tell if it is nuclear or conventional? Plus
           | this isn't a video game, you won't hear some alarm going off
           | after it detonates indicating it was a nuclear explosion.
           | You'll need to send someone to do a damage assessment, and
           | that takes time.
        
             | [deleted]
        
             | radicaldreamer wrote:
             | We have satellites which can detect a double flash
             | (characteristic of a nuclear explosion), the US and
             | probably most other nuclear powers with the exception of
             | perhaps North Korea and Pakistan would know instantly of
             | any nuclear detonation above ground.
        
               | jhart99 wrote:
               | Not to mention the net of seismographs across the US.
               | Those would tell us within seconds of impact if a
               | nuclear detonation has occurred within our own borders.
        
           | upsidesinclude wrote:
           | I'd consider how much more docile the nations of the world
           | would have become sans nuke.
           | 
           | If the possibility of an untraceable, space-borne,
           | hypersonic weapon was on the table, we might have had a
           | better deterrent
           | than nuclear weapons. The lack of fallout and total
           | deniability makes it almost certain they would have been
           | deployed and quite concisely ended a few conflicts at the
           | onset.
           | 
           | It is alarmingly frightening, more so even, because the
           | impact could be extremely precise, leaving infrastructure
           | intact.
        
             | ajmurmann wrote:
             | > I'd consider how much more docile the nation's of the
             | world would have become sans nuke.
             | 
             | Interesting. I expected that nuclear weapons made us
             | more docile. They're a huge deterrent against big powers
             | going to war with each other. I think we are seeing this
             | play out in
             | Ukraine right now. If Russia had no nuclear weapons, I'd
             | expect NATO to have intervened much more directly at this
             | point, especially after seeing that Russia seems much
             | weaker than expected.
        
               | anonAndOn wrote:
               | That is precisely why Putin keeps saber rattling about
               | Russia's nukes. NATO (but mostly the US) would wipe out
               | the Russian forces in Ukraine in a matter of days. Since
               | he's committed so much of Russia's military to the
               | invasion, the west would effectively castrate Russian
               | defenses and likely all manner of hell would break loose
               | in all those oppressed satellite regimes (hello!
               | Chechnya, Georgia, Belarus, etc.)
        
               | zozbot234 wrote:
               | The normalization of saber-rattling about nukes is one of
               | the most unsettling outcomes of this whole conflict TBH,
               | and hopefully it's going to be addressed in some way down
               | the line. If every non-nuclear power is suddenly
               | vulnerable to conventional attacks by any rogue state
               | with nukes, the ensuing equilibrium is pretty clear and
               | is not good for overall stability.
        
               | mcguire wrote:
               | Nuclear saber rattling has been the norm for a very long
               | time; it's just that after the fall of the Soviet Union
               | there wasn't much need for it. Things have returned to
               | their more traditional state.
        
               | anonAndOn wrote:
               | Kim Jong Un would like you to hold his soju.
        
               | chasd00 wrote:
               | In the 80s it was not unusual for armed Russian strategic
               | bombers to cross into US airspace above Alaska and then
               | be escorted back out by US interceptors. I agree nuclear
               | saber rattling is unsettling but it can get much worse
               | than what we're seeing now.
               | 
               | /btw, in other discussions I've been too cavalier
               | throwing around the likelihood of nuclear weapon use in
               | Ukraine. I've thought about it much more since those
               | other threads
        
               | willcipriano wrote:
               | I don't feel like anything has really changed in regards
               | to nuclear saber rattling, Biden did so last year in
               | regards to US citizens[0] no less.
               | 
               | [0]https://townhall.com/tipsheet/katiepavlich/2021/06/23/
               | in-gun...
        
               | zozbot234 wrote:
               | Well yes, but that's just Biden missing the point
               | entirely as usual. The military is sworn to defend the
               | Constitution against all enemies foreign and domestic, so
               | if a mass insurgency is ever needed to counter some
               | future totalitarian government, much of the military will
               | be on _that_ same side. What Putin has been saying is a
               | whole lot more serious than that.
        
               | willcipriano wrote:
               | I think he more critically missed that using nuclear
               | weapons on yourself is a massive tactical blunder. Just
               | pointing out this isn't anything new.
        
               | p_j_w wrote:
               | >If every non-nuclear power is suddenly vulnerable to
               | conventional attacks by any rogue state with nukes
               | 
               | There's nothing sudden about it, this has been the
               | reality for decades now. We here in the US were on the
               | other side of the matter in Iraq and arguably Vietnam.
               | This is an old truth.
        
               | the_af wrote:
               | Some in the US even argued for using nuclear weapons on
               | Vietnam, out of frustration with the lack of progress
               | with conventional war.
               | 
               | Imagine how that would have gone -- dropping nukes on the
               | Vietnamese in order to "save" them from Communism.
               | 
               | Thankfully saner minds prevailed.
        
               | IntrepidWorm wrote:
               | Yup. Henry Kissinger was a big part of that nonsense,
               | along with a whole bunch of equally sinister stuff. The
               | cluster bombings of Vietnam, Cambodia, and Laos were in
               | many ways directly the result of his machinations.
        
             | nradov wrote:
             | Nah. There's no point in putting hypersonic cruise missiles
             | in space. Too expensive, and not survivable. Those weapons
             | will be launched from air, ground, and surface platforms.
             | Magazine depths will be so limited that they'll only be
             | used for the highest priority targets. They won't be enough
             | to end any major conflict by themselves.
        
               | derefr wrote:
               | > Magazine depths will be so limited that they'll only be
               | used for the highest priority targets. They won't be
               | enough to end any major conflict by themselves.
               | 
               | I'm probably being incredibly naive in saying this, but
               | what about "non-wartime" decapitation strikes -- where
               | _instead_ of going to war, you just lob some well-timed
               | hypersonic missiles at your enemy's capitol building /
               | parliament / etc. while all key players are inside;
               | presumably not as a way to leave the enemy nation
               | leaderless, but rather to aid an insurgent faction that
               | favors you to take advantage of the chaos to grab power?
               | I.e., why doesn't the CIA bring ICBMs along to their
               | staged coups?
        
               | ISL wrote:
               | If you do this, the enemy's nuclear-weapons services will
               | look in the playbook under "what to do if someone kills
               | the government", see, "launch everything as a
               | counterattack", and press the button.
               | 
               | A key advantage of a hypersonic weapon is the possibility
               | of first-strikes to disable the enemy's retaliation
               | systems before they have the ability to launch more-
               | traditional retaliatory responses. Only submarines are
               | likely to be mostly-immune to them.
        
               | 323 wrote:
               | So why are the Chinese doing it?
               | 
               | https://www.heritage.org/asia/commentary/chinas-new-
               | weapon-j...
        
               | nradov wrote:
               | The Chinese weapon is ground launched, exactly as I
               | stated. Sure you can boost such a weapon up above most of
               | the atmosphere in order to get longer range, but the
               | downside is that higher altitude flight paths make it
               | easier to detect and counter.
        
               | sangnoir wrote:
               | AFAIK, there are no publicized counters to partially
               | orbital hypersonic glide weapons in their glide phase due
               | to their maneuverability and speed. Perhaps THAAD - but
               | it may be difficult to ascertain the target when a weapon
               | can glide halfway across the world
        
               | nradov wrote:
               | Well in _theory_ the RIM-174 (SM-6) has some limited
               | ability to intercept hypersonic glide weapons. Although
               | obviously that's never been tested.
               | 
               | There are counters to hypersonic glide weapons beyond
               | shooting them down. If you can detect it early enough
               | then the target ship can change course and try to evade.
               | The sensors on those missiles have very limited field of
               | view so if it's not receiving a real time target track
               | data link for course correction then it can possibly
               | be dodged (depending on how many are incoming, weather,
               | and other factors). Even if the target can't evade, a bit
               | of advance warning would at least allow for cueing EW
               | countermeasures.
        
               | upsidesinclude wrote:
               | Nah? You don't put cruise missiles in space. You put mass
               | in space.
               | 
               | I suppose the point would be to hit the highest
               | priority targets and nothing else. Loss of command and
               | logistics has a profound effect on endurance.
        
               | nradov wrote:
               | Putting mass in space as a weapon is just a silly scifi
               | idea disconnected from reality. Even with modern reusable
               | rockets, launch costs are still extremely high,
               | especially if you need enough platforms to hit time
               | sensitive targets. And the platforms wouldn't be
               | survivable. There are cheaper, more effective ways to
               | fulfill the mission.
        
             | openasocket wrote:
             | I don't think you'd necessarily have deniability. We have
             | early warning radars and satellite networks now capable of
             | identifying an ICBM launch in the boost phase. Even with
             | only ground-based sensors detecting the missile in
             | midcourse, it is a ballistic missile, which means the
             | missile follows a predictable trajectory. This can be used
             | to fairly precisely determine what it is aiming at, but
             | also could be used to trace the missile back to a launch
             | site.
        
               | upsidesinclude wrote:
               | Well from space, the point of origin is a bit arbitrary.
               | We could just wait for our satellite to reach enemy
               | territory.
               | 
               | Also, ICBMs in their present form would not likely
               | resemble anything deployed in space
        
           | MetaWhirledPeas wrote:
           | > So, because of this game-theoretic equilibrium, any use of
           | the stratosphere for ballistic weapons delivery is
           | effectively forbidden -- even though nobody's explicitly
           | asking for it to be.
           | 
           | Interesting! SpaceX was hoping to one day use Starship for
           | quick intercontinental flights. I wonder if this unspoken
           | rule would make that prohibitive?
        
             | misthop wrote:
             | Unlikely, as those flights would be scheduled and the
             | launch site publicized. It wouldn't absolutely preclude a
             | masked nuclear strike, but that would be possible already
             | with space launches.
        
               | brimble wrote:
               | It'd also be a pretty shitty first strike, since you'd be
               | limited to the count of starships scheduled to launch
               | (and likely only ones headed generally in the direction
               | of your target if you _really_ want to mask it) at about
               | the same time. So, probably just one or two, at best.
               | Meanwhile, you'd need at least dozens (of missiles--more
               | warheads) to have any hope of substantially reducing a
               | major nuke-armed opponent's capability to retaliate.
               | 
               | Not remotely worth the complexity of setting up and
               | executing. _Maybe_ worth it against an opponent with
               | extremely limited launch capacity (North Korea?) but
               | that's a pretty niche application.
        
             | merely-unlikely wrote:
             | Until we have some magical non-polluting rocket fuel, I
             | can't imagine intra-planetary rocket trips ever becoming
             | permissible. Planes are bad enough.
        
               | ucosty wrote:
               | Ignoring production, hydrolox would work, not that
               | SpaceX is going down that road.
        
           | dirtyid wrote:
           | >game-theoretic equilibrium
           | 
           | Equilibriums change; every major US platform was at one
           | point designed to be nuclear capable, i.e. cruise missiles
           | now liberally launched from planes/bombers/ships that are
           | all nuclear capable. There's no reason nuclear countries
           | who get attacked by any US platforms should assume any
           | incoming ordnance ISN'T nuclear, down to gravity bombs,
           | except for expectation - knowing the US has overwhelming
           | conventional capabilities and would rather use them than
           | nukes.
           | 
            | The same will apply as conventional ICBMs mature - we
            | haven't seen much of this because ICBMs have not been
            | sufficiently accurate unless carrying nukes, where CEP
            | measured in meters doesn't matter. For countries with power
            | projection, it was dramatically cheaper to get closer first
            | and deliver less expensive ordnance. Conventional ICBMs
            | seem effectively
           | forbidden because most actors assume they're too inaccurate
           | for anything but nukes and too expensive for anything but
           | nukes.
           | 
            | But that's changing - there are hints that the PRC is
            | pursuing rapid global strike, i.e. its version of US Prompt
            | Global Strike, because IMO it's the great equalizer in terms
            | of conventional mutually assured destruction, precisely
            | because it isn't banned. A lot of articles are being seeded
            | on SCMP about PRC hypersonic developments that spell out
            | meter-level CEP ICBMs designed to conventionally attack
            | strategic targets at depth, aka Prompt Global Strike.
           | 
            | Ergo (IMO) the PRC is maintaining a no-first-use nuclear
            | policy while conducting a massive nuclear build-up to set up
            | credible MAD deterrence. This sets up the game theory of
            | accepting that conventional ICBM attacks on the homeland
            | from across the globe are possible, and that it's best to
            | wait for confirmation unless one wants to trigger nuclear
            | MAD. The entire reason the US / USSR and countries that
            | could moved to a nuclear triad or survivable nuke subs was
            | that it bought more time than a hair-trigger / launch-on-
            | warning posture.
           | 
            | This makes a lot of sense for the PRC, which doesn't have
            | the carriers, strategic bombers or basing to hit CONUS (or
            | much outside the 2nd island chain). It makes a lot of sense
            | for any nation with enough resources for an ICBM rocket
            | force but not enough for global force projection (basically
            | everyone). The world will be a very different place if such
            | capabilities proliferate. Imagine any medium-size country
            | with the ability to
           | hit stationary targets worldwide - fabs, server farms, power
           | stations, carriers being repaired in a drydock.
        
             | salawat wrote:
              | Hypersonics are a bogeyman IMO. You'll get one volley
              | before everyone starts rolling out flak or other anti-
              | warhead defenses, and hypersonics have a gigantic weakness
              | in not being able to maneuver for beans. Once you're going
              | over a mile per second, predicting where you'll be and
              | filling that space with crap to destroy you isn't that
              | hard.
             | 
             | Who cares if you can blow up one target once? Unless you
              | marshal enough to wipe out enough infrastructure to really
             | cripple your opponent, it won't do you much good anyway;
             | and if you do cripple them, and they're nuclear,
             | congratulations; you just won a nuclear response. You now
             | have bigger problems.
        
           | malaya_zemlya wrote:
           | take a look at
           | https://en.m.wikipedia.org/wiki/Prompt_Global_Strike
        
           | Robotbeat wrote:
            | I'm not sure there's much difference between that and the
            | Russians using air-launched cruise missiles (with ranges of
            | hundreds
           | to potentially thousands of kilometers and almost always
           | capable of carrying a nuclear warhead) launched from their
           | Tu-95 Bear strategic bombers (equivalent to the B-52), which
           | Russia has done several times now in Ukraine.
        
             | gambiting wrote:
             | I think the difference is that air launched missiles could
              | carry nuclear payloads but usually don't, while ICBMs could
             | carry non-nuclear payloads but usually don't. All kinds of
             | countries have been using air launched missiles all the
             | time which at least on average tells us that every time one
              | is fired it won't (shouldn't) have a nuclear payload. ICBMs
             | on the other hand have never been used against anyone, and
             | their stated goal for existence is carrying nuclear
             | payloads - so if you see one coming your way you can assume
             | it's a nuke, even though technically it doesn't have to be.
        
           | stickfigure wrote:
           | > despite these being a fairly-obvious "dominant strategy"
           | for warfare
           | 
           | I don't think these are quite as viable as you think. ICBMs
           | are _expensive_. Probably tens of millions of dollars each,
           | for a single-use item. Cruise missiles cost $1-$2 million to
           | deliver the same payload and have a better chance of
           | surprising the enemy.
           | 
           | ICBMs have longer range, but how often do you need to strike
           | targets more than 1000km past the front line? They're
           | inherently strategic weapons.
        
             | KineticLensman wrote:
             | > ICBMs are expensive. Probably tens of millions of dollars
             | each
             | 
             | For sub-launched ICBMs (like the UK's nuclear deterrent)
             | you also need to factor in the through-life costs of the
             | launcher platform, and the fact that once it starts
             | launching, it has given itself away. We only have four
             | subs, not all of which are on patrol, so it would be
             | barking to compromise these to deliver a conventional
             | payload.
        
               | misthop wrote:
               | Which country is "we"? The US has 14 Ohio class SSBNs in
               | service until at least 2029
        
               | therealcamino wrote:
               | Probably the UK.
               | 
               | https://www.royalnavy.mod.uk/news-and-latest-
               | activity/operat...
        
               | KineticLensman wrote:
               | Yes, the UK (I should have made this more explicit).
               | 
                | The UK's current deterrent force is expected to be
                | replaced by the successor Dreadnought class [0] in the
                | 2030s. They are currently projected to cost £31 billion
                | (likely an underestimate) for four subs, each of which
               | can carry 8 missiles max. Again, these are a horribly
               | expensive way to deliver conventional explosives when we
               | have cruise missiles instead.
               | 
               | [0] https://en.wikipedia.org/wiki/Dreadnought-
               | class_submarine
        
             | rmah wrote:
             | Even cruise missiles are horribly expensive for
             | conventional munitions payloads. Cruise missiles were
             | developed to deliver nukes.
             | 
             | An unguided 1000kg "dumb" bomb costs $2,000. A "smart bomb"
             | costs $20,000 to $100,000. A cruise missile costs $1mil to
             | $2mil.
             | 
             | In the scope of any protracted real war, sending out lots
             | of cruise missiles is _horribly_ inefficient. Much much
                | cheaper to send out a few planes to drop 100's of tons of
             | dumb or smart bombs. IOW, you can deliver 10x to 100x more
             | boom if you just use planes and bombs. 1000x more if you
             | use long range artillery. But then, the pilots or soldiers
             | are at risk-- and that is a political calculation.
        
               | chasd00 wrote:
               | about cruise missiles, wasn't there one of those DARPA
               | contests to see if a guy in the garage could produce a
               | cruise missile? IIRC it got quite scary and was cancelled
               | or something. Being in the drone and high power rocketry
               | hobby i have absolutely no doubt there's enough knowledge
               | and electronics availability for a guy in their garage to
               | come up with something that delivers 50lbs to a precise
               | gps coordinate a few hundred miles away for less than
               | $10k. Once you do that, it's easy to scale up to 500lbs
        
               | HALtheWise wrote:
               | There was a man in New Zealand trying to make a very low-
               | cost DIY cruise missile as a hobby project [0]. iirc, he
               | was using a pulsejet engine, but ended up getting shut
               | down by some visits from stern-looking government agents.
               | 
               | 0: https://www.theguardian.com/world/2003/jun/04/terroris
               | m.davi...
        
               | openasocket wrote:
               | I'm skeptical you could hit that sort of range for
               | anywhere near $10K. Forget the electronics, you need an
               | engine powerful enough to lift a few hundred pounds for
               | that distance. Unless you want it detected immediately by
               | early warning radar you need it to fly at a low altitude,
               | like a hundred feet or less. Unless you want it to take
               | forever and be susceptible to infantry with small arms it
               | needs to be traveling fast, in the hundreds of miles an
               | hour. That's simply not possible with an electric system
               | with today's technology, and a rocket engine won't
               | provide the endurance or efficiency you need. That leaves
                | a jet engine or piston engine. Plus, flying at that speed
               | and altitude means you need an effective autopilot system
               | that uses terrain-following radar. You'll also need some
               | nice guidance packages that allow the shooter to set
               | multiple waypoints, so the missile doesn't have to just
               | fly a direct course. And a 50 lb payload of high
               | explosive just isn't that helpful. There aren't a ton of
               | targets where you only need 50lb of explosives to defeat
               | them, that are also going to stay in the same exact GPS
               | position long enough for your missile to travel a few
               | hundred miles. So you'll want a different terminal
               | guidance method, either some sort of radar sensor or
               | infrared.
               | 
               | I don't think you could get an engine capable of getting
               | you hundreds of miles at that speed and altitude, much
               | less the sensors and guidance system.
        
               | stavros wrote:
               | Gliders though.
        
             | tshaddox wrote:
             | > ICBMs have longer range, but how often do you need to
             | strike targets more than 1000km past the front line?
             | 
             | Don't you need to also consider the vast expense countries
             | (okay, mostly just the United States) spend to essentially
             | extend their "front line" well beyond their own borders?
        
               | rainsil wrote:
                | Well the first year of the Iraq War cost the US $54
                | billion, according to Congress's budget [0]. This
                | doesn't include the total cost of the supporting
                | infrastructure needed to deploy troops in Iraq quickly,
                | but we can estimate that using the increase in defence
                | budget from 2002-3, or $94 billion ($132B in 2020) [1].
               | 
               | According to Wikipedia, Minuteman III ICBMs have a 2020
               | unit cost of $20 million[2], so for the cost of an Iraq
               | invasion, the US could have fired about 6600 missiles.
               | Considering the invasion toppled the Iraqi government,
               | it's pretty unlikely that firing 6600 missiles with
               | conventional payloads would have been anywhere near as
               | effective.
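                | 
                | A quick sanity check on that ratio, using just the two
                | figures above:
                | 
                |     budget_increase = 132e9   # $132B in 2020 dollars [1]
                |     minuteman_cost = 20e6     # ~$20M per missile [2]
                |     print(budget_increase / minuteman_cost)  # -> 6600.0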
               | 
               | [0]: https://en.wikipedia.org/wiki/Financial_cost_of_the_
               | Iraq_War...
               | 
               | [1]: https://en.wikipedia.org/wiki/Military_budget_of_the
               | _United_...
               | 
               | [2]: https://en.wikipedia.org/wiki/LGM-30_Minuteman#Count
               | erforce
        
               | tshaddox wrote:
               | The comparison we're making is whether _precision
               | attacks_ , presumably on roughly building-sized targets,
               | would be cheaper to do from long range via ICBMs (with
               | conventional warheads), or via much cheaper but shorter-
               | range missiles. My guess is that _neither_ ICBMs nor
               | shorter-range missiles could have accomplished what the
               | U.S. military accomplished in Iraq. Presumably missiles
               | alone were responsible for a small portion of that $54
               | billion.
        
           | nradov wrote:
           | Shorter range ballistic missiles have been heavily used in
           | multiple conflicts around the world for many years. The US
           | military has been researching the possibility of using
           | conventionally armed long range ballistic missiles to fulfill
           | the prompt global strike mission. Potential target countries
           | have no anti-nuke defenses. But there is a risk that Russia
           | or China could misinterpret a launch as aimed at them.
        
           | beaconstudios wrote:
           | in WW2, Germany used V2 missiles for indiscriminate bombing
           | of cities (primarily London). I can imagine it would look
           | like that, but worse - and having gone to a few museums that
           | showed Blitzkrieg London, that was bad enough as it is.
        
             | dekhn wrote:
             | blitzkrieg is something else. You're referring to the
             | London Blitz. https://en.wikipedia.org/wiki/The_Blitz was a
             | bombing campaign (airplanes fly over and drop bombs on
             | cities, a very WWII thing to do). V-1 and V-2 sort of came
             | "after" when rocket and guidance tech developed enough that
                | it was practical to target cities using missiles from
                | hundreds of miles away (northern France, I think).
        
               | beaconstudios wrote:
               | Yes you're right, I meant the blitz, and regular bombing
               | did indeed come first. It's been quite a while since I
               | learned about ww2 history!
        
             | the_af wrote:
             | Agreed about the nastiness of V2 attacks.
             | 
              | However, the existence of nuclear weapons _today_ doesn't
             | seem to have prevented indiscriminate bombing (using
             | whatever weapons: dumb bombs, unguided rockets, cruise
             | missiles) of targets (including cities) in several
             | countries in recent years.
        
           | chasd00 wrote:
           | i would imagine delivering a conventional warhead with an
           | ICBM has a very high risk of being mistaken for a nuclear
            | armed ICBM. Also, they're expensive. Putting a JDAM package
            | on an old iron bomb, turning it into an advanced
            | precision-guided munition, is very cost-effective.
           | 
           | https://en.wikipedia.org/wiki/Joint_Direct_Attack_Munition
        
           | importantbrian wrote:
           | To tie this in with current events this is exactly what makes
           | the no-fly zone idea in Ukraine so dangerous. All of the
           | things you have to do to establish a no-fly zone and take
            | away the enemy's ability to fire into and affect your no-fly
           | zone look the same as a prelude to an invasion or nuclear
           | first strike. This is made worse by the fact that many of the
           | weapons systems you would be using are dual use. Meaning they
           | were designed to deliver conventional or nuclear weapons.
           | It's a massive gamble that the actions won't be
           | misinterpreted or used to justify moving up the escalation
           | ladder.
        
         | JasonFruit wrote:
         | Chemical and biological weapons are very useful in warfare as a
         | way to demonize one combatant. False or doubtful claims of
         | chemical weapons deployments have an effect on the response of
         | the public and international organizations that is entirely out
         | of scale with the damage that could be inflicted.
        
           | danuker wrote:
           | Relevant if you want real life cases of military
           | impersonation: https://en.wikipedia.org/wiki/False_flag
        
             | groby_b wrote:
             | In general, a wikipedia link with no additional comment
             | does little to advance a discussion in any direction.
             | 
             | It might be relevant, but it's an extremely low-value
             | comment. A good chunk of the people reading the comment
             | (and caring about it) will already know it's describing
             | false-flag operations.
             | 
             | A good way to think about good HN comments is "is there a
             | specific point I'm trying to make". Anything that doesn't
             | try to articulate a point is likely to be downvoted.
        
               | danuker wrote:
               | Thank you.
        
         | bsedlm wrote:
          | Just because we don't see it doesn't mean it is not happening.
        
       | alpineidyll3 wrote:
        | Hype over crap like this grinds my gears. Organophosphorus
        | agents like VX are ALL toxic. There's about a zillion such
        | toxic molecules all containing the same functional group. This
        | study does not demonstrate that this tool is a better generator
        | of toxic molecules than anything that includes the basic rules
        | of valence and a rudimentary understanding of shape similarity.
       | 
       | When thinking about whether ML does something novel, we must
       | always compare with some simple alternative. I would be impressed
       | if it'd predicted something like Palytoxin, a highly specific
       | molecule with extraordinary toxic activity. There's no way the
       | tools of this paper would though.
       | 
       | -- director of ML at a drug company.
        
       | ck2 wrote:
       | Why make a chemical weapon when you can just tweak a virus which
       | self-replicates?
       | 
        | BA.2 is even more infectious than BA.1, which is saying
        | something. Imagine an engineered BA.3 with even more spread,
        | and then make it even more deadly. You might even be able to
        | target it to one race or region if there is a gene specific to
        | that area.
       | 
       | Always hoped the future would be Star-Trek-like but it seems all
       | it takes is one dictator or terrorist to end the world, slowly at
        | first, but then it would double every other day and be
        | impossible to stop.
        
         | danuker wrote:
         | If you make it too deadly, maybe it doesn't spread as far
         | (because the hosts die). Make it just the right amount of
         | deadly!
        
       | sjdegraeve wrote:
       | .
        
         | [deleted]
        
         | ______-_-______ wrote:
         | You might be looking for this thread
         | https://news.ycombinator.com/item?id=30699673
        
       | jleyank wrote:
        | Chemical weapons above a certain level, bio weapons and nukes
        | are all seen as weapons of mass destruction and are not really
        | that useful tactically. Introducing strategically destabilizing
        | elements to a conflict greatly increases its unpredictability
        | and is probably a health risk for the leaders involved.
        
       | verisimi wrote:
       | Wow - I picked up on this earlier today, and even quoted the same
       | as in this article. I was amazed that the scientists had not
       | considered that AI could be/is being used for harm. (Was
       | downvoted for this, but whatevs.)
       | 
       | It struck me as incredibly naive, but then - what would someone
       | else do in their situation? Most of us work in silos without
       | awareness of how our work is used, and I suspect we are often
       | causing (unintentional) harm to others whether we are scientists,
       | programmers, in finance, in health, in government, etc. If we
        | realise our predicament, there isn't a moral authority to make
        | things right. There is only the legislation that has been
        | written by lobbyists paid by the corporations we work for.
       | 
       | Putting the article in broader context, perhaps it is about the
       | creation of a moral framework for AI intended to pacify our
       | disgust at the system we find. I expect that we will be expected
       | to look away as AI "ethics" committees justify the unjustifiable,
       | but call it ethical. As whatever-it-is is found to be ethical
        | after all by ethical authorities, most of us will wave this
        | through and consider that we have acted judiciously. IMO.
        
       | teekert wrote:
        | Interesting exercise. Perhaps the harmful-molecule-generating
        | AI still generates helpful molecules, because molecules harmful
        | at a certain dose may sometimes be very beneficial at a (much)
        | lower dose. And the other way around, of course.
       | 
       | Perhaps we should simply have one "biologically active molecule"
       | generating network. The dose will ultimately determine the
       | toxicity.
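        | 
        | As a rough sketch of that "dose determines toxicity" idea
        | (made-up Hill-curve parameters, not from any real molecule):
        | 
        |     # Hill-type dose-response: the same molecule scores as a net
        |     # benefit at a low dose and a net harm at a high one.
        |     def hill(conc, ec50, n=1.0):
        |         return conc**n / (ec50**n + conc**n)
        | 
        |     def net_effect(conc):
        |         benefit = hill(conc, ec50=1.0)        # saturates early
        |         toxicity = 2 * hill(conc, ec50=50.0)  # larger max harm
        |         return benefit - toxicity
        | 
        |     for dose in (0.5, 5, 500):
        |         print(dose, round(net_effect(dose), 2))
        |     # 0.5 -> ~0.31, 5 -> ~0.65, 500 -> ~-0.82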
        
         | slivanes wrote:
         | I couldn't help but think of homeopathy with the above
         | sentence.
        
           | teekert wrote:
           | Some snake venoms will stop your heart... but at a lower dose
           | they will simply ease the heart and lower your blood
           | pressure. For some examples: [0]
           | 
           | [0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832721/
        
         | l33t2328 wrote:
         | > And the other way around of course.
         | 
         | Whaaat? Are you saying there exist molecules which are very
         | harmful in a (much) lower dose, but are beneficial at a higher
         | dose?
         | 
         | Do you have any examples?
        
           | teekert wrote:
            | So, as I said, my remark didn't come out right, but some
            | molecules may be considered harmful at a low dose and
            | harmless at a high dose if they stabilize deteriorating
            | conduction at or over some threshold concentration. Yeah, I
            | know it's a stretch, but you got me thinking... It's not
            | that clear cut.
           | 
           | I mean the urine of someone on chemotherapy is pretty toxic,
           | still we consider the molecules beneficial to the patient
           | overall (the patient-tumor system if you will, not the
           | patient by themselves).
        
           | teekert wrote:
           | I am saying that the network that comes up with "good"
           | molecules will produce molecules that are very harmful as
           | well, presumably at higher doses.
           | 
           | I mean take some beta blockers (helpful molecules) at 100x
            | normal dose: pretty harmful.
           | 
           | Edit: Yeah my original comment didn't come out right, I
           | agree.
        
       | empiricus wrote:
       | Does this fall into the category of research "try not to make
       | public"? Or is this category only wishful thinking on my part.
        
       | busyant wrote:
       | I'm sure I'm not the first person to consider this, but ...
       | 
       | RNA molecules can often be "evolved" in vitro to bind/inhibit
       | target molecules with high specificity (e.g.,
       | https://en.wikipedia.org/wiki/Systematic_evolution_of_ligand...)
       | 
       | I imagine it would not be difficult to create RNAs that inhibit
       | some essential human enzyme and then use the RNAs for targeted
       | assassination.
       | 
       | I mean, if you're doing an autopsy, you might run standard drug
       | tests for poisons, but who's gonna screen for a highly specific
       | RNA?
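          | 
          | For anyone unfamiliar with SELEX, the loop is conceptually
          | just iterated selection and error-prone amplification. A toy
          | simulation (the binding score is a made-up stand-in for a
          | real binding assay):
          | 
          |     import random
          | 
          |     BASES = "ACGU"
          | 
          |     def binding_score(seq):
          |         # Fake affinity: matches against an arbitrary motif.
          |         motif = "GGAUCCGAUCCGGAUC" * 3
          |         return sum(a == b for a, b in zip(seq, motif))
          | 
          |     def mutate(seq, rate=0.05):
          |         return "".join(random.choice(BASES) if random.random() < rate
          |                        else b for b in seq)
          | 
          |     pool = ["".join(random.choice(BASES) for _ in range(40))
          |             for _ in range(500)]
          |     for _round in range(10):
          |         pool.sort(key=binding_score, reverse=True)
          |         survivors = pool[:50]                    # selection step
          |         pool = [mutate(random.choice(survivors)) # amplify with
          |                 for _ in range(500)]             # copying errors
          |     print(max(binding_score(s) for s in pool))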
        
         | foobarbecue wrote:
         | Have you seen the latest Bond movie?
        
           | busyant wrote:
           | No. Is that part of the plot?
           | 
           | edit: just read the wiki for the latest Bond movie.
           | Apparently, there is nothing new under the Sun.
           | 
           | Thank you.
        
             | Computeiful wrote:
        
         | [deleted]
        
       | cryptonector wrote:
       | The use of nuclear weapons would be... obvious: if an explosion
        | in the 10 kt range or bigger happens, it's a nuclear weapon. There
       | aren't enough nuclear powers to make the use of nuclear weapons
       | plausibly deniable.
       | 
       | The use of chemical weapons might not be as obvious if they are
       | slow acting. And the production of chemical weapons is much
       | easier than that of nuclear weapons. Though, the dispersion of
       | chemical weapons is non-trivial.
       | 
       | The use of biological weapons need not be obvious at all -- "it's
       | a naturally-evolved pathogen, this happens!". The development and
       | production of biological weapons is much easier than that of
       | nuclear weapons. Human and animal bodies can be made to help
       | spread biological weapons, so their dispersion can be trivial.
       | The only thing that a bioweapons user might need ahead of time is
       | treatment / vaccines, unless the bioweapon is weak and the real
       | weapon is psychological.
       | 
       | Sobering thoughts.
        
       | tgtweak wrote:
       | "Now, keep in mind that we can't deliberately design our way to
       | drugs so easily, so we won't be able to design horrible compounds
       | in one shot, either. "
       | 
        | I would heavily discount this as a false sense of security. The
        | reality is that the prohibitive factors in creating new drugs
        | from compounds discovered similarly (by AI or other automated
        | processes) are almost entirely due to safety testing procedures
        | and regulations. If bad actors are trying to find the most
        | lethal compound with no such oversight - and the chances are
        | very high that they aren't bound by any such regulation if
        | they're state-level labs operating with impunity - then nothing
        | but the synthesis itself would make the formulation and testing
        | of these compounds as impractical as the author claims. Take
        | away the years-long, heavily scrutinized, regulated, multi-
        | stage, billion-dollar path to drug approvals and you'll find
        | that the barrier is not so high.
       | 
       | I would like to think this data could be helpful to any
       | organizations looking to proactively develop detectors or
       | antidotes for such compounds - especially if the threat was
       | previously unknown to them.
       | 
       | Let's say an entirely novel class of toxin was found in a cluster
       | of these predictions that has no existing references in private
       | or public records - it could be that another organization has
       | discovered and synthesized something similar through one of many
       | other paths.
       | 
        | Many parallels are drawn between this type of approach and that
        | of whitehat hackers. You must necessarily create the
        | vulnerability to mitigate it. It feels like "white hat" biolabs
        | claiming the same are operating under the same conundrum, and
        | that the difference between "studying for the sake of
        | mitigating" and "creating a weapon" is fundamentally
        | indistinguishable without absolute knowledge of intent - which
        | is impossible from the outside.
        
         | jcranmer wrote:
          | > The reality is that the prohibitive factors in creating new
          | drugs from compounds discovered similarly (by AI or other
          | automated processes) are almost entirely due to safety
          | testing procedures and regulations
         | 
          | Most drug candidates fail because _they don't work_, not
         | because of any regulatory procedure. About 50% of drug
         | candidates that enter Phase III trials--the final clinical
         | trial before approval--fail, and that's almost always because
         | they failed to meet clinical endpoints (i.e., they don't do
         | what they're supposed to do), and not because they're not safe
         | (toxicity is Phase I trials).
        
           | netizen-936824 wrote:
           | That "not working" part has some nuance to it as well. How
           | well do we predict ADME? Is there binding with some off
           | target protein that makes it terrible? Maybe it just doesn't
           | bind to the desired target at all.
           | 
            | Toxins don't have those constraints; it's not even about
           | regulation. Making something that's safe is way harder than
           | making something that is not safe, purely because of the
           | complexity involved in making the thing safe.
        
       | taurusnoises wrote:
        | Anyone wanna ELI5? It's useful for both explainer and receiver.
       | ;)
        
         | xondono wrote:
         | They had an AI that looked for safe drugs by minimizing an
         | estimate of lethality, changed it to 'maximize' and the
         | computer spewed known nerve gas agents.
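          | 
          | The scary part is how small the change is. A toy sketch (the
          | toxicity predictor and generator here are made-up stand-ins,
          | not the paper's actual models):
          | 
          |     import random
          | 
          |     def predicted_toxicity(mol):
          |         # Hypothetical surrogate model over fake descriptors.
          |         a, b = mol
          |         return a * a + 0.5 * b
          | 
          |     def generate_candidates(n):
          |         # Hypothetical generative model: random descriptor pairs.
          |         return [(random.uniform(-1, 1), random.uniform(-1, 1))
          |                 for _ in range(n)]
          | 
          |     def best(n, minimize_toxicity=True):
          |         sign = 1 if minimize_toxicity else -1  # the whole "attack"
          |         return min(generate_candidates(n),
          |                    key=lambda m: sign * predicted_toxicity(m))
          | 
          |     print(best(1000))                           # safest candidate
          |     print(best(1000, minimize_toxicity=False))  # most toxic candidate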
        
         | danuker wrote:
         | Before, computers were used to make less poisonous chemicals.
         | 
         | Now, the people asking computers to do that realized they can
         | ask the computers to make more poisonous chemicals.
        
         | Traubenfuchs wrote:
         | Software is able to simulate the effect of chemical compounds /
         | molecules on the human body. This can be used to find drugs
         | that do specific things, or stronger versions of existing
         | drugs. For example, you could look for very strong but very
         | short acting sleeping pills that immediately make you fall
         | asleep, but cause zero grogginess the next day. Or you could
          | optimize antibiotics to have a long half-life, so you only have
         | to take them once, instead of 3 times a day for a week, which
         | you can easily forget.
         | 
         | Now think about nerve gas. We have discovered lots of different
         | nerve gas agents and know pretty well how much of each type you
         | need to kill a human. Said software can be used to find new
          | versions of nerve gas that kill at even lower
          | concentrations. You could also optimize for other variables:
         | Nerve gas that remains on surfaces and doesn't decay by itself
         | for example.
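          | 
          | A rough sketch of what "optimize for other variables" looks
          | like in code: the same search, just with different weights on
          | made-up property predictors (none of this is from the paper):
          | 
          |     def score(mol, weights):
          |         potency, half_life, persistence = mol  # fake descriptors
          |         return (weights["potency"] * potency
          |                 + weights["half_life"] * half_life
          |                 + weights["persistence"] * persistence)
          | 
          |     candidates = [(0.9, 0.2, 0.1), (0.6, 0.9, 0.0), (0.8, 0.1, 0.9)]
          |     sleep_drug = max(candidates, key=lambda m: score(
          |         m, {"potency": 1, "half_life": -1, "persistence": 0}))
          |     persistent_agent = max(candidates, key=lambda m: score(
          |         m, {"potency": 1, "half_life": 0, "persistence": 1}))
          |     print(sleep_drug, persistent_agent)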
        
       | ChrisMarshallNY wrote:
       | We already (sort of) do this. AI/ML is probably used for
       | simulating nuclear explosions, and is [arguably] even more useful
       | and accurate than actually setting off a bomb, and measuring it.
       | 
       | It makes sense that it could be weaponized. When Skynet becomes
       | self-aware, it would probably just design a chemical that kills
       | us all, and would aerosol that into the atmosphere. No need for
       | terminators, just big pesticide cans.
        
         | wanda wrote:
         | I'm quite sure we have already invented several chemicals that
         | match your description -- like sarin gas, invented in 1938 by
         | someone who, indeed, wanted to create a decent pesticide. A
          | lethal concentration of sarin gas is something like 28-35
          | mg/m3 over 2 minutes of exposure, according to Wikipedia. [0]
         | 
         | Hitler was well aware of its creation, and I believe quite a
         | lot of the stuff was produced for the purpose of warfare. There
         | were several in the Nazi military who wanted to use it, but
         | Hitler declined.
         | 
         | That seems rather odd, given his indifference to exterminating
         | people with gas on an industrial scale beyond the theater of
         | war. It has been suggested that Hitler was probably aware that
         | to use sarin gas would be to invite the allies to do so in
         | response, which would result in a dramatic loss of life on the
         | German side due to the sheer lethality of such chemical
         | weapons. [1]
         | 
         | Perhaps he thought it easier to stick to conventional warfare,
         | in which the pace is more manageable than with WMDs, where you
         | would start going down the road of mutually assured destruction
         | but without the strategic framework in place to prevent anyone
         | from actually wiping out a population before realising how bad
         | an idea it would be.
         | 
         | And I think this reluctance to change the game, this seemingly
         | deliberate moderation, perhaps best demonstrates the true
         | difference between the machine and human in warfare.
         | 
         | It is not a difference in innovation -- we have always been
         | very good at inventing highly optimised ways to end life.
         | 
         | The difference is that a machine intelligence will not
         | hesitate. It will not ask for confirmation, pause or break a
         | sweat. It will pull the trigger first, it will point the bombs
         | at anything that is an adversary and anything that could
         | theoretically be _or become_ an adversary, and it will not
         | miss. And it will not have to face ethical criticism and
         | historical condemnation afterwards. [2]
         | 
         | [0]: https://en.m.wikipedia.org/wiki/Sarin
         | 
         | [1]:
         | https://www.washingtonpost.com/news/retropolis/wp/2017/04/11...
         | 
         | [2]: Assuming this is a Skynet-like machine intelligence, which
         | doesn't really have the capacity for remorse or negotiation and
         | seems primarily, indeed solely occupied with the task of ending
         | human life.
         | 
         | Obviously, a true AI that is essentially a conscious mind
         | equivalent to our own minds, may experience the same hesitancy
         | that most of us would, were our fingers to be over the buzzer.
         | 
         | Unless the AI independently arrives at a different set of
         | values to us, like the Borg or something.
        
           | VLM wrote:
            | It's the classic logistics problem. Scaling ratio of weight vs
           | volume or something like that. Just like nukes, if you heat
           | an enemy soldier to 100M degrees he isn't any more dead than
           | heating him to 10M degrees and volumes expand very slowly
            | with mass, so making bigger and bigger bombs is a fool's
            | errand.
           | 
           | Same problem with chem weapons. You hit a tank brigade with
           | 1000x lethal dose they aren't any deader than if you hit them
           | with 1x dose. But if the bomb misses which is likely, all
           | you've done is REALLY piss them off. Nerve gas in an empty
           | wheat field just kills a bunch of corn bugs but it really
           | pisses people off. If you target their tank brigade and miss,
           | they'll target your home town, as we did to them with
            | conventional bombs even without having been nerve gassed to
           | start with. If you target their home town then the brigade
            | you missed is going to be unhurt and really angry. It's the
            | kind of weapon that's pretty useless unless you have infinite
           | supply and infinite logistics. Like cold war USA or cold war
           | Russia.
           | 
           | The allies had better logistics than the Germans so they knew
           | the second time around in WWII that trying to go chem is just
            | going to end up in the Germans getting more chem'd than the
           | allies.
           | 
            | Another issue is that WWI and previous is all about siege
            | warfare and breaking sieges, where WMD is awesome and
            | useful, whereas
           | WWII and newer is all about maneuver warfare and blitzkrieg
           | and all of Germany's plans and all of their early success
           | were based on the idea that anything in range of shells or
           | aircraft today is going to be occupied rear supply area next
           | week at the latest, so destroying it would be pretty dumb
           | because we need that area to be the rear of the battle space
           | next week. For a modern comparison the USA could have nuked
           | the green zone in Iraq and there's absolutely nothing anyone
           | could have done about it, but 'we' knew we'd be occupying the
           | green zone and needing something like the green zone, and the
           | green zone is sitting there for the taking, so in an
           | incredibly short term perspective it would have saved troops
           | and saved time and saved effort to just nuke it instead of
           | taking it the old fashioned way, but in medium and longer
           | term it would be counterproductive to war efforts to use WMDs
           | against the green zone, so we didn't.
        
           | iosono88 wrote:
        
           | ChrisMarshallNY wrote:
           | Hitler probably also experienced gas (not sure his generals
           | did, though). People forget that he was actually a decorated
           | NCO, from WWI (which had a lot to do with his terrible
           | attitude, later in life).
           | 
           | It was fairly worthless, militarily. High risk, big mess, no
           | real tactical advantage, and it just pissed everyone off. Its
           | only real efficacy would have been for bombing civilian
           | targets, and I don't think they had the delivery mechanisms.
        
           | brimble wrote:
           | Chemical weapons are expensive. Consider the logistics and
           | training required to effectively deploy them, plus any
           | specialized equipment. Meanwhile, they're only useful as long
           | as your opponent doesn't know you're planning to use chemical
           | weapons, since countermeasures are relatively cheap and every
           | major military knew what to do about them by the time WWII
           | broke out. As soon as your enemy knows to beware chemical
           | attacks, all you're doing is annoying them while making it
           | hard for your own troops to advance (they have to put on
           | chemical suits/masks themselves, or else wait for the gas to
           | disperse). Very hard to use effectively in maneuver warfare.
           | They didn't even prove very effective in WWI, which was much
           | closer to an ideal environment for their use.
        
         | nradov wrote:
         | I don't think AI/ML is really used for simulating nuclear
          | explosions. There's not much point; better techniques exist.
        
           | hackernewds wrote:
           | What such better techniques exist?
        
             | nonameiguess wrote:
             | Knowledge of actual physics. Explosions can "easily" be
             | simulated from first principles. Easily in scare quotes
             | because it takes quite a bit of computing power. This was
             | actually my wife's first job back in 2003, simulating
             | missile strikes for the Naval Research Lab. A thorough
             | simulation took a few days back then, but given that was
             | almost 20 years ago, I'm sure it's a lot faster now.
             | 
             | In contrast, think of what you'd need to do this via
             | machine learning. You'd need to gather data from actual
             | missile strikes first and learn approximation functions
             | from that. While it's certainly doable, this is inherently
             | less accurate, thanks to both approximation error and
             | measurement error. It's not like pixels -> cat where the
             | true function isn't known.
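              | 
              | A trivial illustration of that difference, using projectile
              | motion with drag instead of blast physics (my own toy
              | example, nothing to do with actual weapons codes):
              | 
              |     import math
              | 
              |     def simulated_range(v0, angle_deg, drag=0.02, dt=0.001):
              |         # First principles: step Newton's second law with
              |         # quadratic drag. No training data needed.
              |         ang = math.radians(angle_deg)
              |         x, y = 0.0, 0.0
              |         vx, vy = v0 * math.cos(ang), v0 * math.sin(ang)
              |         while y >= 0.0:
              |             speed = math.hypot(vx, vy)
              |             vx -= drag * speed * vx * dt
              |             vy -= (9.81 + drag * speed * vy) * dt
              |             x += vx * dt
              |             y += vy * dt
              |         return x
              | 
              |     def vacuum_range(v0, angle_deg):
              |         return v0**2 * math.sin(2*math.radians(angle_deg)) / 9.81
              | 
              |     # "Learned" surrogate: one scale factor fit to a handful
              |     # of simulated samples -- cheap, but only approximate.
              |     samples = [(v, a) for v in (50, 100, 150) for a in (20, 40, 60)]
              |     k = sum(simulated_range(v, a) / vacuum_range(v, a)
              |             for v, a in samples) / len(samples)
              | 
              |     print(simulated_range(120, 35), k * vacuum_range(120, 35))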
        
       | Barrera wrote:
       | If this were really a practical concern, machine learning would
       | be designing drugs that fly through the clinic today. They aren't
       | and so this paper, though click-grabbing, is probably of no
       | practical consequence.
       | 
       | One reason is lack of data. Chemical data sets are extremely
       | difficult to collect and as such tend to be siloed on creation.
       | Synthesis of the target compounds and testing using uniform,
       | validated protocols are non-trivial activities. They can only be
       | undertaken by deep pockets. Those deep pockets are interested in
       | return on investment. So, into the silo it goes. This might not
       | always be the case, though.
       | 
       | For now, the paper does raise the question of the goals and
       | ethics around machine learning research. But unintended and/or
       | malevolent consequences of new discoveries have been a problem
       | for a long time. Just ask Shelley.
        
         | philipkglass wrote:
         | A successful drug candidate must be useful in the treatment of
         | human medical problems and not have harmful side effects that
         | outweigh its benefits. A weaponized poison may have any number
         | of harmful effects without diminishing its utility. A compound
         | with really indiscriminate biochemical effects, like
         | fluoroethyl fluoroacetate, makes a potent poison without any
         | specific tuning for humans. It's much easier to discover
         | compounds that genuinely harm people than those that genuinely
         | help them.
        
       | MayeulC wrote:
        | Providing chemical plants with models to estimate the lethality
        | of ordered compounds could be a great use case for this work.
        
       | photochemsyn wrote:
       | This is essentially what the pesticide and herbicide industries
       | have been doing since their inception, i.e. designing molecules
       | that efficiently kill animals, insects and plants. It seemed like
       | a miracle at first, but the long-term consequences of things like
       | persistent chlorinated aromatics and their derivatives (Agent
       | Orange and dioxin for example) eventually appeared in human
       | populations.
       | 
        | The development of the toxic nerve agents (organophosphate
        | compounds mostly) in particular was a side effect of research
        | into insect toxins. The nerve agents were discovered in this
        | manner; they simply worked too well. Nevertheless, these
        | pesticides were deemed
       | safer than the organochlorines because they degraded fairly
       | rapidly after application (although they are implicated in nerve
       | damage related diseases like Parkinson's in agricultural areas).
       | 
       | Insect infestations are indeed a big issue in agriculture and can
       | wipe out entire crops if not dealt with, but there are plenty of
       | options that don't require applications of highly toxic or
       | persistent chemicals.
       | 
       | Otherwise, this is just another of the many issues modern
       | technology has created. Smallpox is another one - in the late
       | 1990s, there was a great debate over whether to destroy the last
       | smallpox samples - and then in the mid 2000's, someone
       | demonstrated you could recreate smallpox by ordering the
       | appropriate DNA sequences online and assembling them in a host
       | cell. Then there's the past ten years of CRISPR and gain-of-
       | function research with pathogenic viruses, a very contentious
       | topic indeed, and still unresolved.
        
       | cryptofistMonk wrote:
       | This is not really that worrying IMO - we already have weaponized
       | toxins, viruses, and enough explosives to blow up the entire
       | planet. So what if an AI can come up with something a little bit
       | worse? It isn't the existence of these things that's stopping us
       | all from killing each other.
        
       | hengheng wrote:
       | So, their tool will draw molecules that are good at doing harm,
       | and that is it? No word on stabilization (which makes it safe to
       | handle), synthesis, purification and such. I'd wager that most of
       | these substances have at some point been on somebody's
       | blackboard, but deemed impractical or infeasible, and then not
       | pursued, and that's why we don't know them by name today.
       | 
       | Still a scary lesson though.
        
       | adultSwim wrote:
       | I'm most worried about state actors.
        
       ___________________________________________________________________
       (page generated 2022-03-16 23:00 UTC)