I promised to walk someone through our Truth & Trust Online #TTOCon (thanks @TTOConference!) poster on “left-of-boom misinfosec” yesterday, but we missed the slot. I hate to disappoint, so here’s your online version…

First, the people. It’s taken a lot of skills and specialisations to create misinfosec. @grayspective and I are listed on this poster, but this is work from @Ngree_H0bit, @TheLoki47, and the community across @credcoalition, @misinfosec, and beyond.

Misinfosec hypotheses:
* Infosec = physical + cyber + cognitive security
* Infosec principles & tools work on misinformation
* Cognitive security can be part of existing infosec defences (ISAOs, CyberInterpol)
* 3Vs: it has to work at scale, at speed, and adaptively across many platforms

The structure and propagation patterns of misinformation incidents have many similarities to those seen in information security. The @credcoalition MisinfoSec Working Group analysed these similarities and adapted information security standards (e.g. ATT&CK) to create the AMITT framework. AMITT (Adversarial Misinformation and Influence Tactics and Techniques) includes the left-of-boom misinformation activities that are often missed by other analyses, where “left of boom” covers activity before an incident is widely visible to the public (purple in the diagram).

We open-sourced the AMITT misinfosec framework. You can find it, and the white papers we wrote on its creation, at https://github.com/misinfosecproject/amitt_framework – we’ll keep putting new work there too.

Terminology:
* Campaign: longer, sustained attack (e.g. the 2016 US elections)
* Incident: shorter-duration attack (e.g. Pizzagate); can be part of a campaign
* Narrative: mechanism to interpret why individuals/groups choose to act in a specific context
* Artifact: image, text, site, etc.

Misinformation pyramid: an attacker sees all of it; a defender works upwards from artifacts, and maybe has some intel at campaign level. Most of the battle is at narrative level. Most visibility is at artifact level. Most analysis and defence planning should be at incident level.
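To make the pyramid concrete, here’s a minimal sketch (Python dataclasses; the field names are my own illustrations, not part of the AMITT spec) of how the layers nest: artifacts roll up into narratives, narratives into incidents, incidents into campaigns.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for the misinformation pyramid; field names are
# illustrative, not taken from the AMITT framework itself.

@dataclass
class Artifact:          # image, text, site, etc. -- the most visible layer
    kind: str
    url: str

@dataclass
class Narrative:         # why individuals/groups choose to act in a context
    summary: str
    artifacts: List[Artifact] = field(default_factory=list)

@dataclass
class Incident:          # shorter-duration attack, e.g. Pizzagate
    name: str
    narratives: List[Narrative] = field(default_factory=list)

@dataclass
class Campaign:          # longer, sustained attack, e.g. 2016 US elections
    name: str
    incidents: List[Incident] = field(default_factory=list)

# A defender typically starts from artifacts and works upwards:
meme = Artifact(kind="image", url="https://example.com/meme.png")
narrative = Narrative(summary="example narrative", artifacts=[meme])
incident = Incident(name="example incident", narratives=[narrative])
```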

AMITT is an “influence chain” model (based on the Cyber Kill Chain / ATT&CK): its columns are the steps required to successfully conduct an attack, and breaking any “link” causes the attack to fail (we’re still working out what to do with Microtargeting and Go Physical).
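As a rough illustration of that “broken link” idea (stage names below are placeholders, except Microtargeting and Go Physical which appear on the poster; this is my sketch, not project code):

```python
# Sketch of the influence-chain idea: an attack only succeeds if every
# stage (column) succeeds; disrupting any one stage breaks the chain.
# The stage list is illustrative, not the authoritative AMITT tactic list.

STAGES = [
    "planning",         # left of boom
    "develop content",  # left of boom
    "microtargeting",   # named on the poster; counters still being worked out
    "exposure",         # right of boom: visible to the public
    "go physical",      # named on the poster
]

def attack_succeeds(disrupted_stages):
    """An incident fails if any stage in the chain has been disrupted."""
    return all(stage not in disrupted_stages for stage in STAGES)

print(attack_succeeds(set()))                # True: no counters applied
print(attack_succeeds({"develop content"}))  # False: one broken link stops it
```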

Terminology: TTPs
* Tactic: stage of a misinformation incident (blue boxes)
* (Task: thing that needs to be done during a stage)
* Technique: activity within a stage (grey boxes)
* Procedure: incident described as a group of techniques
We can disrupt any of these.

Notes: attackers have advantages across the four big steps before misinformation is visible to the public (these are the four left-of-boom steps in the Cyber Kill Chain), which is when most analysis and defence starts.
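As a toy illustration of how TTPs let us talk about counters (the technique names and counters here are made up for the example, not taken from the AMITT catalogue):

```python
# Toy sketch: a "procedure" is an incident described as a group of techniques,
# each belonging to a tactic stage. Counters can be indexed by technique, so
# spotting a technique tells us which disruptions are available.
# Names below are illustrative, not AMITT identifiers.

procedure = [
    {"tactic": "develop content", "technique": "create memes"},
    {"tactic": "channel selection", "technique": "create fake accounts"},
    {"tactic": "exposure", "technique": "amplify via bots"},
]

counters = {
    "create fake accounts": ["platform takedown"],
    "amplify via bots": ["bot detection", "rate limiting"],
}

for step in procedure:
    options = counters.get(step["technique"], [])
    print(f"{step['tactic']:>18} / {step['technique']:<22} counters: {options or 'none yet'}")
```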

The point of building tools like AMITT is being able to talk about techniques, artifacts, counters etc. across the *whole* disinformation production cycle, not just after the ‘boom’. It also lets us add disinformation to existing infosec alert feeds and coordinate responses.

What we’re doing next:
- Adding narrative and incident objects to STIX (infosec threat-information exchange message formats) – see the sketch after this list
- Continuing to refine AMITT TTPs
- Continuing to support the Cognitive Security ISAO
- “Red Team” workshop on counters to common misinformation techniques
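For the STIX item, here’s a rough sketch of what a custom STIX-style incident object could look like, expressed as JSON via Python. The x-amitt-incident type and its extra properties are hypothetical placeholders, not the objects the working group is actually proposing.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# Hypothetical custom STIX 2.1-style object for a misinformation incident.
# The "x-amitt-incident" type and its custom properties are placeholders,
# not the definitions being submitted to the STIX community.

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

incident_object = {
    "type": "x-amitt-incident",
    "spec_version": "2.1",
    "id": f"x-amitt-incident--{uuid4()}",
    "created": now,
    "modified": now,
    "name": "Example incident",
    "description": "Shorter-duration attack, possibly part of a campaign.",
    "x_narratives": ["example narrative"],  # placeholder custom property
}

print(json.dumps(incident_object, indent=2))
```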

Main takeaways from @grayspective. Thank you for coming to our online poster!

See also this write-up: link