MITRE ATT&CKcon 2.0 Day Two

Please stand by for real-time captions.
>>Welcome to MITRE ATT&CKcon 2.0. Presenting as our conference MC is the threat intelligence lead for MITRE ATT&CK. She keeps things on track, so please give a warm welcome to Katie Nickels. [ Applause ]
>>Good morning, everyone, and welcome back to day two of ATT&CKcon 2.0. We are so happy you could join us, both here in the room and online. I hope everyone had an awesome night last night; the reception was a lot of fun and those conversations were awesome. Whatever you did, I'm glad you're back here this morning. We have an awesome full day planned, including the lightning talks, back by popular demand and including one from myself, which should be fun later today. There's also the return of the candy bar, which was a contingency of my being on stage, and that's this afternoon, so you don't want to miss any of that. We started off with a great breakfast; I hope you got some smoothies to power up your day, and thanks to our sponsor for that. And now we want to switch over to a bit of a sizzle reel of what's happened so far at ATT&CKcon 2.0.
>>[ Music ]
>>It's a wonderful community to be a part of: open and innovative. I'm consistently impressed with the quality of the presentations I've seen, not just here but at conferences all throughout the year, and with the ATT&CK team itself, coming from the community. This is our second ATT&CKcon, and it's exciting to see the companies that come to the table and are willing to roll up their sleeves and collaborate, learn, and share, and we hope we will have many more years of continued growth. [ Music ]
>>[ Indiscernible ] a lot of people see the MITRE ATT&CK framework as part of the community sharing that drives us forward. [ Music ]
>>[ Indiscernible ] you should be proud of what you have accomplished here.
>>[ Applause ]
>>Thank you so much to Noel and Julian and the awesome media team. I don't think they made it to the reception last night because they were putting that video together, so thank you to them, and to all of you who were here yesterday and gave us great images for that video. Now I will kick things off by talking about some of the themes from what we heard yesterday. First, I noticed bookends to the day. At the beginning, Toni talked about how we have to figure out our own OODA loop so we can eventually take action, and at the end Rich mentioned getting inside the adversary's OODA loop; for those watching online, there was even an OODA loop meme. Then there was the idea of how we prioritize, whether it is our [ Indiscernible – Poor Audio ] or a team of one; David, rocking the first presentation, talked about that as well. Prioritization came up what felt like 30 times yesterday. There was also the idea that we are analysts, but that means we are human: we approach things differently and have different perspectives and cognitive biases, and we have to educate ourselves about those. We heard a few common themes: working as a team, and using machines to help us out, whether it is TRAM or other data-driven approaches, to hedge against our biases. We also heard about false starts, so to speak; the Nationwide folks told us that when they started out, they almost threw ATT&CK out the door. It is not always easy to start, and you can get tripped up, but you have to persevere. Throughout all of those talks, and throughout the day, I think that video just showed us that we have to work together if we are going to gain that intelligence advantage, as Toni talked about in our awesome keynote. We have to work together as a community. That was the serious side; now for the not-so-serious but really important side: Microsoft Excel, is it a great tool or the best tool ever? The debate on Twitter was spirited. First up, Dan is here today, and he shared some of the things he uses spreadsheets for, so you can guess which side he falls on: figuring out which food to try first, a rocket-ship cookie experiment. He's also a chocolate aficionado and baker, and I appreciated that one. A follow-up gave a shout-out to Excel as well: certainly one more "spreadsheet of doom" is a good Excel reference. I loved this one on Twitter: a lot of presentations excelled at ATT&CKcon by using Excel, keeping with the pun theme. It's great for sharing data you've worked on; as the original raw source, maybe not so much, and that's where the debate started. At the center of the debate, Mark chimed in as well: the fifteen spreadsheets my team must look at show the ineffectiveness of using Excel. Mark and I were kindred spirits here. Someone else was on the anti-Excel train, and I hope we bring them back in; an unpopular opinion is a good thing, because dissent in intel is important and keeps us sharp. A couple more tweets about that: someone took the spreadsheet and created a beer, wine, and liquor influence version of ATT&CK, maybe using our methodology. And: "this is why Navigator is so much better than Excel." We will see if the Excel debates rage on today.
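The Navigator side of that debate is easy to demonstrate: instead of a spreadsheet, technique coverage can be expressed as a small JSON layer that the ATT&CK Navigator can load. A minimal sketch in Python follows; the technique IDs, scores, and layer name are illustrative examples, not data from the talks, and a production layer would also carry Navigator version metadata.

```python
import json

# A minimal ATT&CK Navigator layer built in code rather than a spreadsheet.
# The technique IDs and scores below are made up for illustration.
layer = {
    "name": "Example heatmap",
    "domain": "enterprise-attack",
    "description": "Techniques from our reporting, scored by how often we saw them",
    "techniques": [
        {"techniqueID": "T1059", "score": 12, "comment": "seen in 12 reports"},
        {"techniqueID": "T1027", "score": 7},
    ],
}

serialized = json.dumps(layer, indent=2)
print(serialized)  # save as a .json file and open it in Navigator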
Also, yesterday morning I asked the Twitter community for memes, and you responded in spades. You are the best; I believed in you, and you delivered. A couple of people I wanted to call out: Chris on Twitter, if you are watching, we will mail you your ATT&CKcon vintage flag for all of your many, many memes, and thanks to Adam for pulling this together. The executives will love this one: "I don't always get called on to solve problems for a safer world, but when I do, I always start with the MITRE ATT&CK framework." If you interview at MITRE, bring that and hand it over during the interview. More important were all the cat memes and botnets, and a shout-out to CIS with the controls, so thanks, Chris, for that. Our other kind of meme expert is Axel, who I think is actually here, so I'll ask him to come up and receive his prize. There's the adversaries classic meme; so many good ones. My favorite one, though: at the end of the day, we were not ignoring you, we were waiting. Thank you so much for your awesome memes. [ Applause ] This is a popular one, the four brains. Dan has a strong meme game, and I hope you keep that up; I appreciate the Star Wars reference. And from Dan again, on a more serious note, a good summary of Toni's keynote; I appreciate him for that. We realize it's tough for the online folks because Jamie has awesome interviews during the breaks that cut into your feed; these are good problems to have, my friends, but we appreciate you. From the feed, this was our favorite, so Jacob, if you're listening, send us a DM. I was laughing for a good ten minutes about this one about Jamie and the framework; this is the ultimate, so thanks to Jacob for that. It was a lot of fun yesterday on Twitter, so keep tweeting with the hashtag. We are seeing watch parties around the world: a hello from Virginia Beach, someone from India; keep tweeting today, it is fun for us. And since yesterday's closing remarks didn't have any ninja cats, we've got Jedi turtles, lightsabers, and ninja cats for you on day two. On a slightly more serious note, there are
some tweets to call out. I mentioned this as well: Bryce called out the importance of diversity. Diversity is a strength in security; different perspectives make us better. We talked about the biases we have as humans, and different perspectives help us hedge against them, which Bryce acknowledged. Also, some photographer love: we have amazing graphics as a result, so we appreciate the photographer as well. Brian could not be here today, but he made a comment that I feel summarized yesterday, quoting something John Wunder said: MITRE is not providing the value, the community is. We maintain ATT&CK, but you make it real; you all use it every day. I felt that was a great way to summarize our day one, so thanks to all of you for tweeting. Here in person there were also great conversations, and we look forward to continuing those through our day two. Now we will officially kick things off with our first speakers; I know they have interesting stories to tell. It is my pleasure to introduce our next speakers. Our team was excited to get this submission, because we often get questions from folks like "can you create ATT&CK for this domain?" and "have you seen that?" They want new matrices, and with ATT&CK updates there is a lot on our plate; we can't do everything, and we have chosen to scope ATT&CK more narrowly. That doesn't mean you can't take our methodology, take our philosophy of explaining the why, what, and how behind it, and apply it to new domains. Our next speakers have done exactly that in the domain of misinformation. This is a huge topic on our minds, with an election coming up in the United States, and we were pleased to have this talk kicking off our day two. SJ and John will talk about how they took the ATT&CK methodology and applied it to disinformation. Please join me in welcoming Sara-Jayne Terp and John Gray. [ Applause ]
>>We took your baby and did things
to it, and there are now three layers of [ Indiscernible ] [ Indiscernible – Low volume ] [ Indiscernible – Participant too far from mic ]
>>Some background; I'm sure all of you know what the Credibility Coalition is. We are a research community that started up around a 2017 MisinfoCon event: researchers, and ultimately people, who got together because they care about the quality and credibility of online information and believe that it's pretty important to a civil society. Ultimately it's about creating standards around credibility, and fortunately the Credibility Coalition has incubated this project.
>>[ Indiscernible ]
>>We knocked on doors and worked our way through them. There are six different working groups, of which we are one, currently supported by the Credibility Coalition. As part of our project, I think we've got to give a little shout-out to the foundation whose funding helped us move this thing along. In terms of the people involved, we have to acknowledge some folks who have supported us over the last year: myself, and a blend of people from academia, government, and elsewhere, including a few of us from companies we also run, and [ Indiscernible ]. Overall, our big challenge in how we approached this was thinking it through, creating a framework, and working to understand this notion of how communities are organizing attacks: information-based attacks, disinformation, misinformation, network propaganda. A big part of this was an exercise in rigor, trying to classify a number of these incidents.
>>[ Indiscernible ] [ Indiscernible – Participant too far from mic ] This is not what started way back in 2016 or 2017, the talk about "fake news", people typing things and putting them out there; it's not that. The disinformation we are looking at specifically is coordinated: manipulated content, where images and video come under the umbrella, where it may be in the attribution, may be in groups, and maybe a botnet is contributing to it. Also, the way these types of accounts are set up is changing: from really easy-to-find bots and botnets that are on all the time and connect via a hashtag, it is starting to spread across communities.
>>Like the problems we talked about yesterday with adversary techniques, we tried to look at this through the lens of asking questions about social and cognitive factors: the key actions, tactics, and strategies involved. I will sprint through where we are and how we got here, mainly because SJ and some other folks have done a lot of the work, and show how we built it, keeping an eye on the timer. We do have a few objectives, and one of the things we are working toward is describing the techniques and tactics. Where we are at, moving forward, is that we have essentially gone through this red-team exercise and are now working towards the blue-team side of things, looking at the techniques we identified and what some counters are going to be; that's on the agenda.
Really quickly: as SJ noted, she personally and some others have been in this space for some time now; I feel like a latecomer. Here is a timeline of the working group and when people were pulled in, just a timeline of the work we've done. What it really boiled down to for us is that there has been a lot of admiring the problem, but not enough action being taken. We think of it, in essence, as erosion, an attack on democracy; we think about the problem through the lens of actors and processes, and we don't think democracy has 27 years to wait for a framework, to wait for a proper process. So we endeavored to move fast, borrowing from our startup backgrounds: a group of us sat down and worked hard to try to get stuff done, and we have delivered numerous papers, given numerous talks, and shared stories. [ Indiscernible ] This is our timeline. In a couple of weeks we will hold our workshop and continue to refine things as we move through. In all fairness, a lot of this has been done as a volunteer project; a lot of our teammates, the people behind it, are doing this while holding down jobs, paying out of pocket, doing it because they care. In the first six months we ultimately open-sourced it and documented what we identified as 63 different campaigns, and we will talk about developing the STIX format. Really, the first six months were foundational, and in August we moved to GitHub, so we have this moving forward, and it will be an evolution. Thinking about what I heard yesterday: a lot of you have been in this space for a long time and have a real head start. Look at what we call misinfosec, the intersection of misinformation and information security; let's just say we've got to move quickly and keep evolving, and we also see this as very much a community function.
Moving to the next slide: a big part of this, and I was impressed hearing it yesterday, is obviously the issue of having a common language across communities, thinking through better defending, creating better tools, and ultimately looking at this through the lens of a problem happening at scale. SJ noted that machine learning and AI will continue to drive this problem; I hate to say that I am bullish on this career, sadly. We've taken a multidisciplinary approach because we had to look at it from a number of different points of view: the misinformation community, information security, information operations. Some information security folks look at it through the lens of conflict. And while arguably this happened well before the online space and before 2016, obviously since the events of 2016 it is being looked at as a social problem, one where the information space is full of pollution. Ultimately a lot of it comes down to scale. SJ will take it away.
>>We've got a problem. We've got coordinated misinformation that looks very similar to earlier information security attacks. I said at the beginning of last year that we are pretty much at the next stage of development of this, and from that point of view I think we've gone past that and are heading towards the virus-checker era, but we don't have time to create an entire new response field. We've got a large-scale problem; it needs a system-level response; we need to do it at speed and scale; and the endpoints are human beings and human communities, who have something like 200 individual vulnerabilities each, which can be hit in different ways, so we've got many scales to work at. So we looked at the system, and we didn't have the framework we needed, so we needed to combine efforts. We created a misinfosec working group, and we found everybody we could who was working on misinfosec and got them in one place, talking to each other, with Internet organizations part of that as well, looking at how we can protect human communities. You probably haven't heard of it yet, and we need a messaging format,
but we also need to connect different groups in some way and get them talking the same language. One of the problems we had when we started looking at how we connect together, at what we needed to put together, was that we had no threat lists, no big list of [ Indiscernible ]. Then we realized we were looking at different views: we had the attacker's view of the space and what they see, and eventually, looking at what people talked about, we tried to work out what an attack is as a time-limited thing, and we discovered there were different framings. For instance, the 2016 election interference was two years long, which is very different from something like [ Indiscernible – Poor Audio ]. Human beings base themselves on stories, narratives, and belonging, on a sense of in-groups and out-groups, of who we are. What is happening now is narrative tracking: you look on the Internet and track through this. We use the artifacts, the messages, and the users [ Indiscernible ] [ Indiscernible – Poor Audio ]. You get a bunch of stuff at the bottom and have to work your way back: is it an incident, is it a campaign? This is one of the important points: we track and fight at the narrative level, we basically get stuff at the artifact level, and if we are lucky we get intelligence information at the campaign level. So for the stuff we built, we looked at models. Marketing models are really useful for things like extremism; it turns out a sales conversion model is exactly what you need when you're looking at how extremist conversion is done. We looked at information security models and operations models: they run campaigns, so can we use those too? The Justice Department had a lovely one. Among the information security models we looked at cyclic models, and we adored ATT&CK: we loved the idea of having the stages and the breakdown; that's what we needed. You actually had what we needed. And we tried really, really hard to fit this information into the ATT&CK framework as it existed, and we could not make it work. We tried, we really tried, and we think ultimately we'll end up with branches, you doing your thing, us doing ours, with hierarchical things on top, but for the moment we are different. So: we didn't have these catalogs. There were pieces of catalogs all over the world, but nobody had one, and nobody had a standard for them, so we built a standard and put in extensions. This is one of them: useful information [ Indiscernible ]. Another is resilience counters, which we are busy collecting, and that is useful. So this is what a datasheet looks like, with an example; if you're on Twitter you'll see this as a STIX graph, and a small example: one day, people [ Indiscernible ].
>>[ Captioners Transitioning ]
>>There was no fire; we've been
seeing these since 2010; Kate Starbird at the University of Washington has been tracking them. We started looking at techniques; this is one of the techniques. We brought all of this together and moved forward because of time. We built this. This is our baby. It's not as big as ATT&CK because we built it off of 22 different incidents. There were an awful lot of post-its on the wall. We pulled out tactics and techniques from the incidents, posted them, grouped them, and went through the stages. Then we looked at all the other frameworks that existed, including ATT&CK, and said: these things map, this one may help more, these are the stages we have. The big purple and red blocks, those are phases. While we were looking, we realized there were other pieces, other models we could include. The planning stages came from PSYOP. The evaluation stage came from advertising models: if you have an incident that is part of a campaign, you will run it, but you also check what worked, to feed into the next incident. This slide is not pretty, but near the blue line are the tactics and the techniques; I will give you the address at the end. I suspect some of you will probably go back and think "we can do this too." That would be awesome. This is where we are at the moment. We've already had people starting to use this, starting to pick out techniques, starting to share the techniques being used. The next thing we started doing is the next part of this.
>>We did the mapping between misinformation incidents and
AMITT. We got almost a one-to-one mapping between the two. We can carry misinformation in STIX as long as we put the incidents into sets of narratives. In the diagram I have put up, it would be nicer to have two separate objects for incident and narrative, but it works; it's beautiful. We can match that, and it also means we can match the data science to the work, and we can talk to each other. We can do this: we can talk about threat actors; finally, we can do this. We can start doing the science. It just becomes cleaner. At the moment there are email reports going out on misinformation; now we can start doing real-time feeds.
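The STIX-based datasheets described here can be approximated with plain JSON: a custom incident object, a custom narrative object, and a relationship linking them, wrapped in a bundle. The sketch below uses only the Python standard library; the `x-amitt-*` type names and their properties are my own illustrative assumptions, not the working group's published schema.

```python
import json
import uuid

# Plain-JSON sketch of STIX-2.1-style custom objects: an incident linked to
# the narrative it pushes. Type names and fields are illustrative assumptions.
def sdo(obj_type, **props):
    """Build a minimal STIX-style object with a random id."""
    return {"type": obj_type,
            "spec_version": "2.1",
            "id": f"{obj_type}--{uuid.uuid4()}",
            **props}

incident = sdo("x-amitt-incident", name="Example incident")
narrative = sdo("x-amitt-narrative", name="Example narrative")
link = sdo("relationship", relationship_type="uses",
           source_ref=incident["id"], target_ref=narrative["id"])

bundle = {"type": "bundle",
          "id": f"bundle--{uuid.uuid4()}",
          "objects": [incident, narrative, link]}
print(json.dumps(bundle, indent=2))
```

Keeping incident and narrative as two separate objects, as the speaker suggests, lets one narrative be referenced by many incidents without duplication.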
>>This is where we are going next. We have a blue-team workshop coming up. Just as you have been picking off all those techniques one by one, we are doing counters one by one per technique, at the tactic level, at the technique and procedure level, plus anything that's interesting or weird on top of that. We would like AMITT, in some form, to be in the misinformation response centers; we've been talking not just in this country but to other bodies as well. We would also like to start testing; we have been so busy wrapped up in trying to get a standard going. We want to find the response populations, the people who can use this. One of the group sites has a page on this, and there's a "get help" repo. That's us. [ Applause ] I talked a little faster than I thought, so you've got loads of time for questions.
>>Questions for our presenters? And I forgot to thank Roger; you have been doing great work for us.
>>I think it’s great you’re using lessons learned and apply
towards this important problem. One of the lessons learned early on was on the paper call trusting trust. Using web
security as an example, we found we came up with the certificate
transparency. What you think are the equivalents of establishing
trust and things like certificate transparency for this problem?
>>That would be the other half of my life. I work on the global
disinformation indexed. I’m working on trust certification for pages. Some discussion, I don’t notice
much difference if we can pull this into the community. And we use the
things that already exists. >>We have an interesting trust
problem. What we work on is all about humans and hacking, it’s
social engineering at scale. That makes life more
interesting. I don’t think it’s that different.>>One thing that hit my mind as
you were talking, and stunning work, first of all; I talked yesterday, but this blew me away. One thing I want to ask: have you looked at creating something similar to a SIEM? Last year, at ElasticON in LA, a company showed how they were utilizing Elastic to track an influence campaign. Their slogan was basically "there is an influencer for that." I can imagine you looking at influencers.
>>Oh yeah. I'm also volunteering. I think that may be the next step of adopting the misinfosec model: going on to an analytical tool that can start aggregating and playing with your data. I was wondering if you would be interested in something like that?
>>Please. Please. We also need an aggregator connector as well.
>>You've got STIX, and that makes life easy.
>>[ Participant comment/question off-mic ]
>>I love the presentations, and that people volunteer for this [ Indiscernible ].
>>This one is on. I'm Emily Frye; we want to make sure we are connected to you. We lead the election integrity initiative here. We've done a great deal of work that logically should be connected to your intellectual capital, and we would love to talk to you.
>>Thanks so much for presenting; it's fascinating work. My question for you: not everyone fights nation-states for their day job. A lot of this is regular operations within companies. How can we contribute information? What sort of information and channels are available to contribute back?
>>We are working on that with the ISACs. This isn't just a nation-state problem; we have third parties sending misinformation attacks against organizations, and it's hitting hard and fast. It will become your problem very soon.
>>We currently live in a world where there are lots of news outlets and the narrative is very difficult to control. We came from a world where there were very few and the narrative was very easy to control. How is this worse?
>>We have a whole talk on that as well. It's completely different because the channel is completely different. The Westphalian contract is completely broken: the principle of non-interference between nations is broken, and individuals can now operate at that level of interference. We are in a new world; we have a whole bunch of new worlds and new rules to start building.
>>When you think about the amount of anonymity, one thing I look at is replication of a certain state broadcaster's content that gets no attribution and gets spun out on imposter sites. There are several different issues; the supply side of this thing has dramatically changed.
>>Of the people who track this, Pablo Breuer has done a lot of work.
>>I have an offer. This is interesting and great work. I know a couple of people who know something about STIX; if you need help from the STIX community, it looks like you're doing a lot of great work, and we would love to see some enhancement proposals around this work to get it operationalized.
>>This is more of a thank-you for the work you are doing. Speaking on behalf of the team, we wish we could do more to help you.
>>On that note, please join me in thanking our speakers. [ Applause ]
>>What a great talk to kick off the morning. I heard some themes showing that even across different domains and industries we have commonalities. I love the metaphor that they've gone beyond the Cuckoo's Egg era, and how misinfosec can overlap across domains and research. One thing John said: there's a lot of admiring the problem, but not enough action. That is also a commonality, and one thing that's exciting about the people here is that we will leave and take action on what we have learned. I want to give a call out to the folks joining us online: Barry Anderson said it's night where he is in Australia, Mumbai confirmed they have checked in on Twitter, and another Australian says good day from Australia. It's always exciting to see. Moving on to our next speaker: we talk about diversity, and it's not just race or gender or socioeconomic status; it's also diversity of backgrounds. We were excited when we got this next submission for its more academic perspective. Chris is a CTI lead with a government agency in the Netherlands. This talk is about how he brought his academic perspective and applied it to the practice of understanding malware behavior. One thing that caught our eye: he looked at over 950 unique malware families as part of this project, a review of over 15 years of malware evolution. That's a massive sample; it's not easy to get that many, much less do research on all of them. We're very excited for this talk. Please join me in welcoming our speaker. [ Applause ]
>>Basically, I
investigated Windows malware and plotted the results, and this gives you a very good impression of how to prioritize your mitigations.
>>I did all of this as part of a cyber threat lab. In daily life I'm an intelligence analyst at the largest public-sector provider for critical infrastructure. A quick index: before going into the results, I would like to give you some background on the organization I work for, because it heavily affected how this research came about. The Dutch have been living below sea level for hundreds of years. Most of Amsterdam is four meters below sea level; some parts are even six. As you can imagine, the risk of an overflow is immensely high, and by overflow I don't mean a buffer. As a form of risk mitigation, we built the Delta Works, which decreased the risk of a flood striking. You may ask why I am telling you this: because if you work in an organization like this on a daily basis, it affects how you look at cyber defense. The Delta Works are basically a physical form of perimeter defense, or, as we would like to call it, a physical kill chain against a threat. This is the Lockheed Martin Cyber Kill Chain. When I started, I saw this model gaining a lot of traction and a lot of attention within academia. In contrast, for the ATT&CK framework I didn't see that much coverage; I'm not going to compare the two on technical merits today. I decided to give ATT&CK the academic attention it deserves, or at least make a decent start. Unfortunately, you cannot just go about writing a paper saying "adopt this framework, it's great"; you should do something with it. Long story short, I studied and analyzed Windows malware and communicated the results in a research paper, using the uniform language offered by the framework.
>>Our methodology: we got our data sample from a repository maintained by a research institute. It is a curated data set, which provides several quality assurances over randomly picking your samples. It is also a labeled data set, which means it provides metadata on malware families. For most malware families, multiple samples are offered; we went through the samples in this repository to select the most recent one for each family. Then we ran each sample through a sandbox, the online service for automated malware analysis offered to us by Joe Security, which can be regarded as quite representative of most of the current market. We then performed manual plotting of all the observed activity onto the ATT&CK framework. If you do all of that, you get this: the matrix of the framework as it was before the April update; timelines in academia are quite long. I know most of this will be difficult to read for most of you, but it is in the published paper, which is available for download, and also available as a separate file.
>>What I will do now is quickly go through the most relevant results of the paper. There is more in the full paper, but this should give you a good impression of the direction in which certain techniques are heading, which may inform your mitigations. In the slide format, the red down arrow indicates a decrease in use of the technique in the data set over the covered time frame; the green arrow indicates the opposite.
>>The most
observed technique in our data set fell under the execution tactic. While it is the most observed, its use is also decreasing over time, which means it is getting replaced by other techniques within the execution tactic. One of the techniques gaining traction is one that shouldn't surprise you, of course. What we found interesting is that we saw the first public coverage of it around 2017, which is also the first year it is observed in our data set.
>>There are basically five techniques listed here with a significant increase in recent years, including side-loading. And then discovery. Discovery is interesting: most malware does some form of discovery, just like most benign applications, as you can see with the Query Registry technique. This is interesting because in daily practice it means it can be pretty difficult to distinguish malicious discovery from benign, as it relies mostly on native operating system functions. What we saw is a significant increase in malware discovering security software. This includes looking for antivirus and for local firewall rules, but also more advanced checks such as virtualization detection. That gives some frame in which this could be interpreted. Command and control is typically one of those tactics that can be difficult to observe, and this is reflected in our results. What I would like to discuss today is a specific implementation which we saw applied for command and control. Are you still awake? You probably know the technique as primarily being used for privilege escalation and defense evasion. If you look it up, you will see the entry refers to a specific implementation mainly being used for command and control. We found this is true. What we also found is that this technique was first applied by more sophisticated samples of malware; after 2015 it was adopted by what you could call commodity malware, and quickly dropped by the sophisticated ones.
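The rising and falling arrows described above boil down to comparing per-year technique counts across the covered time frame. A toy sketch of that comparison follows; the (technique, year) observations are made up for illustration and are not the paper's data.

```python
# Toy version of the trend arrows: count how often a technique is observed
# before and after a split year. The observations below are invented examples.
observations = [
    ("T1047", 2013), ("T1047", 2014), ("T1047", 2015),              # fading
    ("T1518.001", 2017), ("T1518.001", 2018), ("T1518.001", 2018),  # rising
]

def trend(technique, obs, split_year):
    """Return 'up', 'down', or 'flat' by comparing counts before/after split_year."""
    early = sum(1 for t, y in obs if t == technique and y < split_year)
    late = sum(1 for t, y in obs if t == technique and y >= split_year)
    if late > early:
        return "up"
    if late < early:
        return "down"
    return "flat"

print(trend("T1047", observations, 2016))      # down
print(trend("T1518.001", observations, 2016))  # up
```

A real analysis would normalize by the number of samples analyzed per year, since sample counts in a curated repository are not uniform over time.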
>>You might ask how can you say this? Data sets are label data
sets. It is attribution based on reporting so for most families
they refer to five or six open-source reports. Based on
that data we are able to attribute malware that only to
actors but also to categories of actors. Based on this, we are
able to see this pattern. There are several theories which
might explain this but for us it was interesting as its proving
the hypothesis that more taxis are brought in the conflict quickly adopted.
>>As we move towards the end, this is what I would like to leave you with. It has already been said during this conference that ATT&CK is great for CTI. Undeniably. It is perhaps for this reason that you see more and more vendors in the market implementing automated mappings within their products, like we did manually. I would like to argue here today that ATT&CK is great, but using it in this form does not make it CTI. While such an automated mapping may come across as a trustworthy verdict of what a sample is capable of, there are limitations, and we call these biases. These tools can only see so much, and they are biased towards that in the presentation of their results. This goes for anything in threat intelligence: results depend on the capabilities of the source. You need to account for each of these biases; otherwise you will be misled further down the road.
>>Very important: look at what something can tell you but, above all, what it cannot tell you. Basically, we figured automated analysis is great for CTI but unsuited as a source of CTI by itself. Stay safe, stay critical, and be aware of your own biases, and also the biases of your toolkit. >>Thank you.
>>[ Captioners Transitioning ] >>There were applications. How much of that is everyday software? >>Yes.
>>Sounds like an area for future research.
>>With that, we are almost at time. Please join me in thanking Chris. Tools can help us, but we have cognitive biases, continuing the theme of day two. Our next speaker is going to give you something completely different. In case you are not awake, he is absolutely going to wake you up. James is a self-described security guy at Titania Solution Group, and his talk takes the approach of talking about some of the issues when using ATT&CK: it can be useful, but there are a lot of ways to misuse it. Please join me in welcoming James Leroux. >>[ Applause ]
>>All right, the sound is good. Good to see everyone here. I had the pleasure of speaking last year, and this year everything has grown. Hats off to the ATT&CK team. I cannot believe what you get away with. It is amazing. 2.0 really means let's take risks and go twice as big, let's really blow this up. Because I represent the salty guy that the keynote speakers were talking about. I am the guy who has been doing this a long time with my head down in the trenches. How do I talk about that and be critical? You see the Dungeons & Dragons reference: I will turn it into a game. It is a crazy idea, but I think security needs crazy ideas. Don't be afraid of trying something crazy sometimes. Quick disclaimer before I start: I think I am a level 8 security guy now. If I put on the cape... hold on, do you not believe me? I have a medal for cyber security excellence. Now that my credibility is established [ Laughter ], do not call me Bobo the cyber clown. So, what am I going to do? My perspective is a little bit different from your CTI folks'. I have been under the surface, and every once in a while I come up and go, everything has changed. It makes for a socially awkward weirdo, but Halloween is tomorrow so I am allowed to do this. So, who is security guy? He likes to think he is a James Bond, Sean Connery type who is always smooth and knows what to do in an emergency situation, but he ends up with the math he likes to do in his free time, and analog measures; it is okay, I have tools for that. I can process analog and digital, and when I take my day off I will camp in my tiny house. You can come to me there and ask questions. How do I actually see myself? I am Mr. Bean sometimes, driving the car with disasters happening around me. It is almost like fear; maybe some people share this. I get up on top of the powerline. What does this have to do with ATT&CK? That is what I like about ATT&CK: it gives a guy like me something to start with, to talk to a person who is interested in threat intel. What happened a couple of years ago is I moved to a farm. It was by choice. It was a real [ Indiscernible ]. I have HughesNet; I do not know if you are familiar with satellite Internet. The modern web is not built for loading websites at 2000 ms latencies. My overwatch team. I have a theory about myself. I am at a point where I started my career with ambition, and as time goes on I get saltier and saltier, and all of a sudden I am looking at my career in a bash window, doing it by myself. Maybe I am on my way to being a hermit farmer, so I did buy a farm. As I am trying to convince you, I am using this tool called Beautiful.ai; it only loads in Chrome, because apparently Firefox is not a modern browser. As I am waiting for stock imagery to load, this is the tangent I go off on: I am looking at the JavaScript, the things I used to do. I am looking at that stream: it looks like base64 encoding, or some other encoding, or a hash. It is a squirrel ADD mentality, because when you fight fires for 15 years you get dropped into different situations. It is almost instinct to do this stuff. What am I going to do? You remember the old screenshot capture folders? That is me in VR. I do have some type of audience. If you are here and you get what I am talking about, there are dozens of us. [ Laughter ] This came up in a tweet: language makes us gods. I am always worried about getting taken out of context when I do this, so I go to the point of markups and disclaimers, but sometimes you have to speak your mind. But you have the audience; it is 3.2 million subscribers to MITRE. Maybe they could hire [ Indiscernible ]. So what is the point of my talk? That is the type of thing that happens to you when you are a self-proclaimed technical firefighter. People make excuses. People talk about the cognitive environment, and you tell yourself it is not my fault because things are complicated. Really, what is the point? When you look and ask me about a MITRE ATT&CK heat map chart, it is not that. If you want to pay me money, my motivation is going to light up green. I am going to do it even if I talk myself into doing it. I am going to do it.
Here is the thing: I regularly Google stuff to see what it means to regular people. So I googled cyber security heat map. At the bottom is a blog post from 2010 defending the risk matrix. This is not a new conversation I am having; it has been going on forever. This is how people think of algorithms. I looked up algorithm on a stock photo site, which is awesome; this is what comes up. People think about The Matrix, that movie that came out in 1999. I think of algorithms completely differently than you might, is my point. Tabletop role-playing is in my character. Rolling dice, critical strike, I hit you with this technique: what does that mean to you? Communicate with people. Don't be afraid to ride a unicorn and stop to talk about stupid things. What changed in my life to give me these perspectives? A lot of it, for me personally, is listening to Audible. I started doing that on my commute years ago. It has been five or six years, and then you look at your library and you have read 200, 400, I don't know, because it is 15 at a time and I have 20 pages. Not to brag about how many books I read, that is not the point. Sorry, I did not read them, I listened to them. There are people who get upset about that, but these are the types of people who answer questions that I have. So, if you are having these questions about what do I do to communicate and how do I apply the ATT&CK framework, do not go into crisis and start blabbering on stage with a wig on, because that is what you end up doing. Judea Pearl, anyone know who he is? I got a PhD; my other one came from Stack Overflow. He is an awesome guy credited with coining the term Bayesian network. So when he writes a book, I read it. He is still doing this at 90. I will be on the farm by then.
Bayes versus Fisher. I often looked at heat maps and thought, why don't I like this? Look at the history: tobacco and Sir Ronald Fisher. He is a genius, or was. And the Reverend Thomas Bayes; Bayes goes back to the 1700s.
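As a concrete illustration of the Bayesian view invoked here, applied to the earlier point about security-software discovery overlapping with benign behavior; every probability below is made up purely for illustration:

```python
# A toy Bayesian update: how likely is a sample malicious given that it
# queries for installed security software? All numbers are illustrative.
p_malicious = 0.01            # prior: 1% of samples in the feed are malicious
p_query_given_mal = 0.60      # 60% of malware checks for security software
p_query_given_benign = 0.05   # 5% of benign software does the same

# Total probability of observing the query behavior at all.
p_query = (p_query_given_mal * p_malicious
           + p_query_given_benign * (1 - p_malicious))

# Bayes' theorem: P(malicious | query) = P(query | malicious) P(malicious) / P(query)
p_mal_given_query = p_query_given_mal * p_malicious / p_query
print(f"{p_mal_given_query:.3f}")
```

Even with a 60% hit rate on malware, the low prior keeps the posterior near 11%, which is the kind of base-rate effect a heat map alone hides.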
>>Jordan Ellenberg, also a fantastic book. My point? My point is perspective outside of just our niche industry. Cyber security, security, but really information theory. There is mathematics behind these things. I am not the guy to teach you the math, but maybe if you talk to me about ideas in a way that is accessible, then we can come to a system that works. Questions? I know it is typical to ask questions now. What is happening? [ Laughter ] That's a good question. We will do Ivan and then Adam.
>>Somehow I have been extensively talking to you this week and...
>>It is called social engineering. [ Laughter ]
>>You answered my first question, but what I wanted to ask was, career-wise, would you suggest someone specialize in something, or would you suggest they try to be a generalist? I was talking to you earlier; I have that problem where we are seeing the trend toward specialization, and some of us like the offensive side and also like doing intel. I was wondering what your perspective is?
>>For me, how I have learned the most is to put myself in uncomfortable positions. I see myself as an ambulance chaser: I go to where the problems are and try to fix them. One of the biggest lessons I have learned is you cannot do that forever. You may be able to, but I cannot. If you are specializing, that is fine; you have so many resources available to you today. Every once in a while I will watch LiveOverflow on YouTube. The dude is teaching me how to reverse engineer x86 while playing a videogame. What? How do you do this stuff? There is so much information available today, which goes back to the other talk: the bar is lower, which is important. So keep doing what you like to do, but don't stop taking care of yourself.
>>A great note to end on. Please join me in thanking James. [ Applause ]
>>You want the torch? I will give you the torch.
>>Passing on the medal. I like it. Bryson said James woke up the audience a little bit. Walter on Twitter said if you are not watching right now you are missing a great talk, and he wants a wig. Other tweets are happening online: Richard in the audience was tweeting a great reading list; check out all of the action and seek perspective outside of your own realm. Thank you to James. Now it is my pleasure to introduce Otis Alexander, a lead security engineer here at MITRE. The ATT&CK team is out there, and we so often get the question, when is ATT&CK for ICS coming out? I say you have to talk to Otis. I am excited for him to give us an update on ATT&CK for ICS and its challenges. Please join me in welcoming Otis Alexander. [ Applause ]
>>I am going to really consider getting a wig for my next presentation. It might help with my confidence. Not this time. Either way, I am going to give a quick update about ATT&CK for ICS. I will cover some of the typical things I get asked about. Why ATT&CK for ICS? How does it fit in with the other frameworks? And when is it going to be available? Why ATT&CK for ICS? We can think about it from a couple of different perspectives. ICS, industrial control systems, are behind some of the world's most critical infrastructure and production. That is a reason in itself to look at how adversaries are affecting these systems. The other thing is, asset owners and vendors want to have a deep knowledge about how adversaries are affecting the systems. They want to know about the tradecraft and technologies, and use that to help improve their defenses. Another thing is adversaries have unique behaviors in these environments. The environments are unique in themselves, and I will go over a couple of the things that we have to consider as far as adversary goals. One of the things adversaries are trying to do in this area is disrupt the delivery of services in conjunction with the industrial process. They may also try to cause physical damage to pieces of equipment, and that is important as well. Some of the things we do not want to happen are the catastrophic failures, things that affect property or human life. Another thing we have to consider is the technological differences between ICS and traditional IT systems. While IT is present in ICS, we also have things like controllers interfacing directly with pieces of equipment that run the process. We also have specialized applications and protocols. These things have to be taken into account at any point in time that we make changes to the system. These are important things we have to take into consideration. There are different ways of defending these systems, and we are less mature in these areas. What are the data sources we need in order to defend the systems? Where do we collect them from? If we are dealing with low-level embedded controllers, they do not have the same interfaces to collect information. We also have to think about the protocols. The last thing I want to highlight is, if we are thinking about mitigations or security controls, we do not want them to affect safety. We do not want to be a part of the problem. These are a lot of unique considerations, and I think that is why we want ATT&CK for ICS. When I think about control systems, we have to think about the asset owner or operator's perspective and what their complete system looks like. This is a representation of the functional levels of an asset owner's system. This includes what you would traditionally call headquarters or something like that: an enterprise system with high-level applications. Operational technology, the industrial control zone, where you have supervisory control, a way to look into the system and control it. And local control that directly interfaces with the pieces of equipment.
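The functional-level hierarchy described here can be sketched as a simple ordered model; the level names below are paraphrased from the talk, not an official MITRE taxonomy:

```python
# Simplified functional levels of an asset owner's system, top to bottom.
# Level names are paraphrased for illustration, not an official taxonomy.
LEVELS = [
    "enterprise",           # headquarters, high-level business applications
    "supervisory-control",  # OT zone: a way to look into and control the process
    "basic-control",        # local controllers interfacing with equipment
    "equipment",            # physical devices running the industrial process
]

def path_to(target: str) -> list[str]:
    """An adversary entering at the enterprise level typically traverses
    every level above the target on the way down."""
    idx = LEVELS.index(target)
    return LEVELS[: idx + 1]

print(path_to("basic-control"))
```

This encodes the observation that follows in the talk: adversaries usually make their way through the enterprise systems and then down toward the process.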
So, looking at traditional adversary behavior, it ranges along the whole model. It is not like adversaries are always just getting into supervisory control or low-level systems. Most of the time they are making their way through the enterprise systems and down. IT literally spans the whole model, from enterprise systems all the way down to basic control, where you have embedded Windows and things like that. That is where enterprise ATT&CK comes into play: it explains the entry points for adversaries coming into IT or enterprise systems. It also explains the conduit they leverage to get down into basic control, the area they need to be in within the industrial control system. We cannot forget about the special goals that adversaries have at the low levels. We cannot forget there are protocols that are unique, and unique platforms in the embedded systems that interface with the equipment. We cannot forget that there are specialized applications that operators use. That is the focus: to complete the story of the adversary behavior from the enterprise all the way down to the low-level systems. What we are doing is trying to figure out the interface between the two models. We do have some solutions, so you get the full gamut of the systems. What are some of our use cases?
>>We have current ones we are working on and utilizing, and we also have future ones we want to look at. In general we utilize a lot of the same use cases as enterprise. One of the big ones was standardized information reporting. We have so many reports for the same incident that report in different ways. Sometimes it is hard to get a sense of what people missed and what is common. A standard language for looking at these reports, or having security practitioners use a standard language to report out, may help the community. A lot of these use cases center around the analyst. Enabling the analyst to do their job better is still a new area. When we talk to OT SOCs, they are having a hard time finding people who have expertise in this area. ATT&CK can help them in their jobs. One thing is hunt and incident response playbooks, analytic development, analyst training, and utilizing adversary emulation: testbed environments to better understand what artifacts are left by adversaries, to understand how well our defenses are doing, and things like that. Another use case is criticality analysis. When we are looking at an industrial control system, we can better understand how assets, functions and [ Indiscernible ] link up to the mission, and how an adversary utilizing certain tactics, techniques and procedures could affect that. That gives us a sense of what we need to focus on in the environment that affects the mission. A future use case that we are really interested in is modeling at the design phase. We have been talking to device manufacturers, and one of the critical needs, one of the things they have been using ATT&CK for ICS for, is to help their engineers understand the ramifications of adding features during the design process for low-level embedded controllers, so they can understand what attack vectors are opening up, the TTPs associated with them, and what mitigations can be used to address those things.
>>Challenges. We have had a lot of challenges in terms of working with this framework and building this framework. One of them is a lack of real-world data. There is really good stuff out there; there is detailed information we are able to capitalize on for this framework. But, in general, our lights are on, we do have water, and generally things seem stable, so it does not seem like a lot is going on. That could be for a number of reasons. Maybe nothing is happening. Maybe we have no visibility and we don't know what is happening. Maybe we are bad at root cause analysis. Or maybe we are bad at information sharing. We are hoping the ATT&CK for ICS model will help get people talking more about what they are seeing in their environments, and also start thinking about visibility so we do catch what is going on. A big challenge is using the proper level of abstraction when dealing with multiple domains. We look at electric power, water systems, oil and gas. What level should the techniques be at to best explain adversary behavior over a broad set of domains, and then a diverse set of protocols and vendors? How do we find the proper abstraction layer? We have been getting feedback to help with that. And then, scope: do we focus on adversaries and what they have done to certain domains? Do we expand that to think about how it affects different domains? Do we look at the whole cyber-physical system domain in general, any cyber element that affects the physical world? Is that how far we want to go, or do we scope it and be more refined? These are all challenges that we have thought about and are addressing with the framework. We have gotten a lot of good feedback that has helped us in terms of our tactic development, the categories we think are important to describing adversary behavior. Initial access is one: we want to know how adversaries are making their way into OT environments. We want to know how they are inhibiting response functions, which deals with safety, protection, and quality assurance, the mechanisms that should be in place [ Indiscernible ]. How do they disable and impair physical processes to cause ultimate impact like loss of control, loss of view, and loss of revenue or productivity? These are all things we are considering. We have updated the ATT&CK matrix to reflect some of these new tactics, and we have added techniques to address how adversaries are accomplishing these goals. So, some things people ask me, I will address now. What is the status of ATT&CK for ICS? We are currently in our third major revision. To highlight the changes made: we actually have individuals from 29 organizations who have done early review or given us initial feedback about the model before the public release. These organizations range over private and public entities. We are getting a lot of different use cases and feedback from a lot of different people. We are planning for release in December of this year. Then it will be available to the larger community so we can collect feedback. We want to do an independent release where we are not on [ Indiscernible ], just to help facilitate rapid response to the feedback we anticipate getting. If you want access before that time, we are currently allowing that via NDA. If you have any questions, please let me know. We are excited to get this out to the public so you can start using it and providing feedback, and we can mature and refine it. Do I have time for questions?
>>Please join me in thanking Otis Alexander. We will push questions into the break. [ Applause ] This is not an easy space; as Bryson chimed in, ICS is inherently different from IT. I see ICS folks nodding. If that is your area of expertise, talk to Otis, Ree and others; we are here to chat about that. Now that leads us into our coffee break. Ready for some coffee? Thank you to Cisco for sponsoring this break. Be sure to visit the exhibitors, and we will have Jamie Williams back today, with Dan Pravin online. We are handing over to Jamie, so see you back here in about 30 minutes.
>>[ music ] >>Welcome back. I appreciate you participating in ATT&CKcon. We are sitting with our first speakers of the morning. I was about to get a coffee, then started watching your talk, and it was all the caffeine I needed. It resonates with a lot of us. Sarah and John from the Credibility Coalition. How do you feel?
>>I had only just found coffee myself.
>>I did too, but I had to refrain from a third. A chance to kick things off and share something that is important but a little different compared to what went down on day one.
>>The title was misinformation threat sharing. One of the first things you introduced resonates with a lot of us, though maybe people have not heard the term: cognitive security. Do you want to go into a little more detail about what it is for the audience?
>>That term comes from Rand Waltzman. We have been talking about misinformation and disinformation, the idea of fake information, and the idea of large-scale coordinated attacks using information. We say cognitive security a lot because we want people to focus on the thing you want to protect. We were talking about setting up the cognitive security ISAO, and originally people talked about naming it for misinformation. You do not want to name an ISAO for the bad thing; you name it for the thing you want to preserve. The idea is you have cyber security and physical security, and then cognitive security: looking at people's belief systems, and treating the community, not just PCs and networks, as the endpoint.
>>We are familiar with trust and social engineering. We call it layer 8 and do not go into more detail. You are really bridging the gap, the mysterious world we never really talk about. On that theme, you are talking about making this practical and bringing it back to the community. You mentioned red teaming and blue teaming. What are the exercises like in your space? I am actually curious how that would work.
>>It would mess with a lot of people's brains.
>>Do you want to talk about the red team?
>>We did a workshop in Atlanta and got some people together, and as SJ referenced in the presentation, if there was an award given out for the most sticky notes put up on the wall... It was a process of going through all the campaigns and incidents we have worked on and documented. We narrowed it down to 22 in terms of the data and information we had, then came down to what we identified as all the different techniques. Then there was a process of matching the techniques with different phases and stages. It was a good exercise to go from this campaign to these are the things we have seen.
>>What we are trying to do is match what the person running the misinformation campaign is doing. What are they doing in terms of planning the campaign, or running the campaign, all the way through to wrapping up and learning from it. Blue team is, what are you doing in terms of defending against this? What are we doing next? Ways to defend against misinformation, both at the technical level and against an entire campaign.
>>Speaking of these campaigns, and going a step back, you mentioned a big target of these spaces: democracy and extremism. Do you think that will hold true over time?
>>It is totally evolving. Not just destabilization but attacking the democratic process itself. You are attacking systems. We have also seen business being attacked. We are seeing misinformation attacks against individual organizations, and misinformation as part of hybrid attacks. We are seeing combined cyber and misinformation attacks becoming part of the attack landscape.
>>I appreciate what you did in trying to work with ATT&CK. I want to give you a second to plug the framework.
>>It is also available on GitHub or [ Indiscernible ]. It is a long address, so we set up a short link. If you look under history, there are papers on the history behind it, including the STIX work we literally just did.
>>Check our Twitter feed for more. Going forward, you are enticing the community.
>>If we get the STIX extensions through, we will suddenly have the ability to build connected data sets, stop just putting information into the format, and start playing with it and building things. We think we would be able to use the whole toolbag on misinformation, so go wild.
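The connected data sets mentioned here come from linking STIX domain objects with relationship objects. A minimal sketch using plain dictionaries in the shape of STIX 2.1 JSON; the names and descriptions are placeholders, not real framework content:

```python
import json
import uuid

def stix_id(obj_type: str) -> str:
    """STIX 2.1 identifiers take the form '<type>--<UUID>'."""
    return f"{obj_type}--{uuid.uuid4()}"

# An illustrative attack-pattern for a misinformation technique
# (name and description are placeholders, not real technique entries).
technique = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": stix_id("attack-pattern"),
    "name": "Create fake social media accounts",
    "description": "Placeholder description of a misinformation technique.",
}

campaign = {
    "type": "campaign",
    "spec_version": "2.1",
    "id": stix_id("campaign"),
    "name": "Example influence campaign",
}

# The relationship object is what makes the data sets connected:
# this campaign 'uses' that technique.
uses = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": stix_id("relationship"),
    "relationship_type": "uses",
    "source_ref": campaign["id"],
    "target_ref": technique["id"],
}

bundle = {
    "type": "bundle",
    "id": stix_id("bundle"),
    "objects": [technique, campaign, uses],
}

print(json.dumps(bundle, indent=2))
```

Once content is in this shape, generic STIX tooling can traverse the `source_ref`/`target_ref` links across misinformation and cyber objects alike.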
>>Once you get into the STIX format you can start processing it with other enterprises, with that cyber-to-physical relationship. Thank you for your time. We will be right back with our next presenter, Chris, who talked about malware sampling and ATT&CK. Be back in a moment.
>>[ music ] >>Welcome back, and I am here with Chris. How has your ATT&CKcon experience been?
>>It has been great. I especially like the quality of the talks here; overall it has been a nice experience.
>>Your favorite talk so far?
>>I think it was from Tony Lambert.
>>Yesterday.
>>Yes. It was awesome. Your talk: you analyzed 900 samples of malware, classified as a strong, solid set of data. You ran it through your pipeline and found trends. Do you want to speak more about them? There was something interesting about execution, discovery and malware.
>>Basically, across the tactics of the ATT&CK framework, authors are dropping techniques as new techniques prove to be more efficient, while for other tactics the techniques were static, like the discovery ones I talked about.
>>There are only so many ways you can do it.
>>Yes, and it overlaps with benign activity. So it basically stays with the same capabilities.
>>Do you think these innovations and changes are driven by defenders, or is it a hybrid of this is how the world is and this is how the world grows?
>>I think it is a cat and mouse game. This was not in the presentation, but what you can see is that the adoption of some techniques goes hand in hand with the leaks of certain malware. When state-sponsored actors, or what we think are state-sponsored actors, leak, the technique is quickly adopted by others. This may be one of the reasons this happens. It showed in the presentation.
>>Of the 900 samples, did you map them out on a timeline and mark significant cyber events like code releases or major breaches? How widespread is the sample size of the data?
>>It is quite spread. It ranges from the first year of coverage, 2003, to the last year, 2018. The coverage is a bit thinner in 2003 than in 2008, but in 2003 there were fewer viruses.
>>Speaking of your work, you mentioned overcoming biases. From 2003 to 2018 we saw a much different platform and arena. As producers in the community, how can we enable more research on your end? You depended on a really strong data set to get done what you needed to do, but what can we do to help that research? Watching Twitter, everyone loved your research, and we would love to push more data your way and learn more of your methods.
>>If you want to do research like I did, representative of the last 15 or 20 years, it will stay difficult to find malware that is 20 years old. If you have it, it will be more difficult to, for example, [ Indiscernible ] because the infrastructure is out of date. It is a difficult problem to solve; sampling malware is basically a big challenge. Most sources you get your malware from are biased. We talked to certain vendors willing to offer samples to us, but some of them have a customer base in, for example, North America and not the rest of the world. These are difficult problems to solve as a researcher if you want to be representative of the whole ecosystem.
>>These are biases; they are okay, but you have to be aware of them, otherwise [ Indiscernible ]
>>What is next? Any big takeaways you were not able to inject into the talk, or opportunities you want to challenge the community to pick up in your work?
>>There is lots of stuff to pick up on. There are so many parties happy to be involved in the follow-up of this research. I think it will take me until next week to look at all the avenues.
>>You mentioned in the beginning this came out of academia. How or what about ATT&CK made it work?
>>It started from a practitioner point of view. I was already using ATT&CK at my organization. I was surprised to see it did not receive that much coverage in academia; basically academia is a few years behind practitioner research. I was surprised it did not get that much coverage. Moving on, I would like to give ATT&CK more coverage.
>>As always, feel free to contribute and send your data in. We love trends. We do not have a lot of time or resources to do that type of work on the data, so anytime you can send it back is greatly appreciated. We will transition to James Leroux, a fan favorite and self-proclaimed Bobo the cyber clown. In the meantime, keep tweeting; we are loving the cat memes. It is almost Halloween, so if anyone has any spicy Halloween memes we appreciate it. A special guest, actually; this is a surprise.
>>My name is Katie Nickels. It is a pleasure to meet you. I figure we have been talking to each other across cameras, so I wanted to come say hi, and you are doing a great job. When are you going to start your late-night talkshow?
>>Tonight, right after the Nationals' game seven. Alan and I were talking; I am a Braves fan, so it was hard to watch. I tried to hide the fact from the community, but deep in my heart, Atlanta Braves.
>>What has been your favorite part of ATT&CKcon?
>>All of it. All the different perspectives, learning things; the list goes on. It sounds sentimental and cheesy, but I have had a good time.
>>It has been fun. Jamie and I worked together on ATT&CKcon and ATT&CKcon emails, and now we work together hosting people on the couch and up front. It has been a lot of fun. I wanted to give a shoutout to you and the awesome work you are doing. It has been a great time; keep it up. We are almost done with day two. I wanted to give a shoutout to the rest of the team; thank you to the folks doing work behind the scenes. I just wanted to sit down and say hello.
>>Thanks for the company.
>>Who is coming?
>>I just got a LaCroix too.
>>I am not checking you; it would go badly. You may be familiar with DerbyCon; they make speakers drink warm Smirnoff Ice. Adam and I are introducing, for ATT&CKcon 2.0, warm LaCroix for people. That may pop up later.
>>As long as I can get one, I am in.
>>You will regret saying that.
>>Good luck.
>>Our final speaker, obviously a fan favorite.
>>No pressure.
>>Great job. I appreciated the change of pace, the different perspective on things. Life on the farm.
>>Your life changes over time, and sometimes it is good to reflect. That is what I was trying to get out of the talk. It is hard to convince someone of something in 10 minutes. I put myself in horrible places. Hopefully I inspired someone.
>>I think you did. Checking Twitter, I think people resonated with what you were saying. Especially in the trenches; I am an optimist. It is easy to blame the people before us. The reality is, if you are not doing something, you are not keeping up, because the field is not keeping up, and I do not like that. It has not gotten any better.
>>Speaking of evolution, there is education and the value of that. Do you want to speak on bringing new people into the field? As practitioners we think we have been doing it for years, but obviously learning never stops.
>>There are people from all walks of life that would make great cyber people. Everyone has a story. Where you come from does not really matter; what matters moving forward is putting yourself in challenging situations. That is what I do, and it is where I have learned the most. If I go somewhere and everything is fine, just queuing alerts and reporting, and I get complacent, it is time to shake it up and figure out what is next.
>>That lines up with what Richard said yesterday. You have to identify problems and attack them relentlessly. Some of our goals may not be achievable, but if we keep pushing we get to the point where we are better off.
>>Absolutely. If you live that sort of lifestyle, you almost cringe when people say lean, agile, [ Indiscernible ], because if you stay in long enough you see things go in circles. It is all a little bit of history repeating.
>>In my career it has always been "we don't have enough data," and now we finally get to the point of, wow, what do we do with all the data? Any ideas? Look at what we did five years ago. I encourage anyone interested to grind away and learn. Put yourself in those difficult positions. You are going to fail, but failing is something that happens.
>>Fail hard, fail fast.
>>I am still on stage doing dumb stuff after years and years.
>>Where do you see the field going? The evolution, taking on hard problems?
>>I think it will be a societal change to some extent. We are building upon a utility we do not understand very well. We understand it in some ways, but think about the freedom of information. If you look at the printing press and how that changed the world (I am sure many smarter people have talked about this), we are playing six degrees of Kevin Bacon, and as time goes on we are at two and three degrees of Kevin Bacon. Humans are not quite used to that. Maybe societally, or maybe from societies that are much better at it, like Estonia, which was birthed in a digital age, we can take inspiration. Sometimes you have to look outside of your bubble.
>>You talked about that level of building society around technology and getting crippled.
>>There are so many names for it. Sometimes I feel like a living meme. Definitely the one where I fixed it and now everything is on fire around me; I have definitely felt that. Anyone in the industry who is passionate has felt it. With cybersecurity people it is not an us-versus-them thing, but a lot of times other people are more organized and articulate, or maybe more outgoing. I am an introvert; I force myself to do it.
>>Great perspective.
>>Circling back to ATT&CKcon and this worldview, where do you see ATT&CKcon growing? How can we, as a community, address these problems?
>>Last year I did a talk on hybrid analysis, trying to figure out how they were attacking things. I was interested in doing the same thing. My talk may have come across as criticizing them; I really was not, I was just trying to see how people were approaching things. We do that stuff in a bubble, and I am guilty of it too, but I think the great thing about ATT&CKcon is that it is community driven. MITRE owns it, somewhat, not really, but we will get the ATT&CK we deserve. Sometimes you need all the ideas to converge and we will find the right way, but let's not be afraid to pursue ideas when they do not necessarily match how we used to do business. The way we used to do business frankly hasn't worked.
>>That aligns with the keynote last year: if you want to go fast, go alone; if you want to go far, go together. I may have botched that.
>>I think he did a Jupyter Notebook last year. Use pandas instead of Excel, or what are you doing?
>>Thank you for your time. That is all the time we have before the next session of ATT&CKcon. We appreciate you logging in; ATT&CKcon will be back in a few minutes.
>>[ Music ] Ladies and gentlemen, ATT&CKcon 2.0 continues with Katie Nickels.
>>Welcome back. Hopefully folks online had a chance to grab coffee. One user on Twitter said she is watching and listening in from Germany. All around the world folks are listening online, as well as here in McLean. Hopefully you all enjoyed that break. Thank you to Cisco. A bunch more speakers and great content ahead. It is fun for me to introduce the next speakers, who return from last year. As I was chatting with José, that was his first-ever public talk. It launched him on a year-long speaking career, all the way to Poland, becoming an internationally recognized speaker. One challenge we often have is getting access to the right data as we are trying to create analytics. Roberto and José have done a lot of work on this. Roberto is now at Amazon and has open-sourced so much, including the Threat Hunter Playbook. José is a student at Northern Virginia Community College with a background in business and data science, which brings a different perspective across fields. He is active in Twitter and open-source communities. It is my job to introduce Roberto and José Rodriguez.
>>Can you guys hear us?
>>Thank you for coming to our presentation. We are talking about an exciting project that maps back to data sources and analytics as well.
>>My name is José. We are from Peru, and we are also brothers. [ Laughter ] [ Applause ] We are the creators of, and contributors to, a lot of the projects you see over there, and we created an open organization to put everything in and start building a community. Today we have a lot to talk about: a lot of material, from exploring ATT&CK data sources in 2018 and today, all the way to the projects we are happy to talk about, to start contributing back to MITRE ATT&CK as well.
>>We would like to start our talk by giving you an idea of how to query all the information we have in ATT&CK. The way we would like to do this is by using the TAXII server that ATT&CK provides, with all the information available as ATT&CK CTI, through the Python library we created on top of the STIX and TAXII client libraries. If you want to install it, you can do it directly from the source. Also, if you want to learn more about the library, you can go to the link below. There are a lot of resources to help you start understanding the library and using it to query the information.
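A sketch of that setup, assuming the `attackcti` package the speakers describe (the client call needs network access to the public ATT&CK TAXII server, so a synthetic fallback keeps the example self-contained):

```python
# Sketch of querying ATT&CK content as described in the talk. The attackcti
# import and client calls are assumptions based on that library's public
# docs; the synthetic fallback lets this run offline.
try:
    from attackcti import attack_client
    lift = attack_client()
    techniques = lift.get_techniques()   # STIX objects from the TAXII server
except Exception:
    # Synthetic stand-ins shaped like ATT&CK STIX technique objects:
    techniques = [
        {"type": "attack-pattern", "name": "Spearphishing Attachment"},
        {"type": "attack-pattern", "name": "Credentials in Registry"},
    ]

print(f"Retrieved {len(techniques)} technique objects")
```

The same client exposes similar getters for groups, software, and relationships, which is what the rest of the demo builds on.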
>>These are some of the functions you can find in the library. If you want more information about each function, you can go to the link below. Today we are using a couple of them in a short video.
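In place of the video, here is roughly what that exploration looks like in Python. The field names (`revoked`, `x_mitre_deprecated`, `x_mitre_data_sources`) follow the ATT&CK STIX convention; the technique records are synthetic stand-ins:

```python
def remove_revoked(techniques):
    """Drop technique objects flagged as revoked or deprecated."""
    return [t for t in techniques
            if not t.get("revoked") and not t.get("x_mitre_deprecated")]

def missing_data_sources(techniques):
    """Return the techniques that document no data sources at all."""
    return [t for t in techniques if not t.get("x_mitre_data_sources")]

# Synthetic technique objects standing in for the real STIX bundle:
sample = [
    {"name": "T-a", "x_mitre_data_sources": ["Windows Registry"]},
    {"name": "T-b", "x_mitre_data_sources": ["Process monitoring"]},
    {"name": "T-c"},                       # no data sources documented
    {"name": "T-d", "revoked": True},      # filtered out first
]
active = remove_revoked(sample)
gaps = missing_data_sources(active)
print(f"{len(gaps)}/{len(active)} techniques lack data sources "
      f"({100 * len(gaps) // len(active)}%)")  # 1/3 techniques ... (33%)
```

Run against the real bundle, this is the kind of arithmetic behind the coverage percentages quoted next.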
>>Basically, we need to import our library and [ Indiscernible ] to allow us to start working with some options. Getting the techniques gives us all the techniques in ATT&CK. We have 535 techniques, and before we continue we need to remove all the revoked techniques, so we have [ Indiscernible ], and at the end we will start working and explaining more about ATT&CK. Something else you can do with the library is follow the same steps, import the libraries, and start working with the information. This is the first time we are translating the framework to Spanish. [ Laughter ]
>>You can see that it is cool to run the Python library and see things we never thought would be translated that way.
>>Using this library we can
start exploring the latest updates the ATT&CK team released a couple of days ago. We know we have 519 techniques. Some of them have data sources and others do not: 49% of the techniques do not have data sources. If we display this information by matrix, you can see that in the enterprise matrix we have one technique without data sources. Last year we had four techniques without data sources, so that means progress; now we have one left. You can go back this weekend and start working on that one if you want to contribute. Right there is the one you need.
>>From a data source
perspective, [ Indiscernible ] data sources are still at the core of the framework. When we presented the methodology to map ATT&CK data sources all the way to the event logs, for modeling data and documenting events, we found relationships between ATT&CK data sources and event logs, and we started to call these relationships sub data sources. This allowed us to start building a document, a Google Doc, where we document and model every event log we collect in our environment. If you want to get the file and learn more about it, you can find it at the link below.
>>There is a lot more to do, which is something we started noticing. Now we wanted to know: what is triggering all of these events? What are the things we can do to start mapping things to an event? Not just an event back to a data source, but back to an API. From a security perspective, we started building this graph where we have events mapped to specific APIs, which is cool for thinking about the data sources we are collecting after a simulation, or how we can plan a potential simulation. This started as the API-to-event project, as part of the [ Indiscernible ] project. It does not cover everything, but some APIs map to Sysmon events. It was also cool to start mapping other events.
>>Last year we started documenting and analyzing all the data from ATT&CK. We found a few opportunities to contribute back to the ATT&CK team, from data sources that are covered by other data sources, to the validation of the event IDs recommended for each ATT&CK data source. What do we mean by data sources covered by other data sources? In this example you can see the technique.
>>Next question, what do we do
to validate the recommended sources? We can start doing simulations to map a couple of the things that get generated back to the data sources. We go beyond just testing whether your EDR triggers. We are talking about learning about the adversary, trying to map data sources to specific actions, and validating the analytics you are already paying for or getting automatically. This might be familiar for some of you: it comes down to being accountable and documenting what it is that happened. If we apply this to the registry, for example, MITRE ATT&CK provides a lot of information you can start using to get prepared. There are a lot of things you need to do besides that: the audit policies you need to enable, to make sure you have the right settings enabled as well. And if you are talking about the Windows registry, especially in the security event log, you cannot just enable it and expect every registry key to trigger. You have to create audit rules for everything you are interested in. What if we are
looking for credentials in the registry? Let's focus on the default automatic logon, where you use three commands to enable a user with specific credentials to automatically log on to your system. How do you set an audit rule for this? There is a project we put together: use the Set-AuditRule function to set up the audit rule for that specific registry key. The commands were straight from the MITRE ATT&CK framework. You have people telling you to cover more than that; we need to have at least three variations. If you feel like being advanced, let's do C# and Python. It does not change the behavior, but mark it as a variation. Are we ready? Not yet; we have to standardize and document. Once you do, you have the [ Indiscernible ] you generated
and are ready to start validating analytics. Then you start freaking out. The reality is the process will have specific events that show up. It does not matter if it is Python or commands; it will pretty much generate a lot of the data sources that MITRE ATT&CK recommends. We can start validating what the technique is recommending for us. ATT&CK does a great job of telling you that you need the Windows registry, but there is also something you can do on top of that to start validating what you need from a Windows registry [ Indiscernible ]
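For reference, the "three commands" behind the default automatic logon normally write the documented Winlogon registry values. A sketch that only builds and prints the `reg.exe` command lines; the username and password are placeholders, and actually running them belongs in a lab, since `DefaultPassword` is stored in cleartext:

```python
import subprocess  # only needed if you actually run these on a Windows host

# The three Winlogon values behind "default automatic logon".
WINLOGON = r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
values = [
    ("AutoAdminLogon", "1"),
    ("DefaultUserName", "labuser"),   # placeholder
    ("DefaultPassword", "labpass"),   # placeholder, stored in cleartext!
]
commands = [
    ["reg", "add", WINLOGON, "/v", name, "/t", "REG_SZ", "/d", data, "/f"]
    for name, data in values
]
for cmd in commands:
    print(" ".join(cmd))
    # On a Windows lab host you would run: subprocess.run(cmd, check=True)
```

This is exactly the activity a registry audit rule (SACL) on the Winlogon key would surface in the security event log.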
>>We started finding opportunities where there are a lot of things we need to do. At the end of the day we are generating similar data. There are a lot of people doing this in the world, a lot of teams, everyone doing the same things. There is a lot happening, but we keep generating the same data. Right, so what if we could share our data sets? Grab one and share it with anyone else. At the end of the day we can help a lot of teams that do not have people with the expertise to jump into data analysis, so you can do [ Indiscernible ]. Everything comes with a lot of documentation. I believe this is key. I want you to know why certain events trigger; I do not just want to give you a document and say good luck. There is actually a lot going on that you need to understand. We take all this data, send it to a collector, collect the data, and send it back, and we can use that to start collecting the data online. We have another environment, thank you, Ruben. [ Indiscernible ] They enable providers, subscribe to those providers, and then collect what those providers are writing. The specific channels for those logs are pretty much not set up and not meant to actually send events to the event log. We apply the same
concept here. You may be asking yourself, how do we collect the data? We use [ Indiscernible ] as a tool to import and export data and send data back to a server. If you want to take a snapshot, all you have to do is follow these commands: -C is for consumer mode, [ Indiscernible ] starts collecting, and I will show you a video of how that actually happens. The server is outputting everything to a file; you can see it is already writing everything to a file at the bottom. I just run a command, my credentials-in-registry command, and at the end I can start validating that it generated data as well. Now I can see the password in there. That is a successful test of my specific simulation. If you go to the Mordor documentation you may ask about the names: Gustaf is my dog, and the other one is his girlfriend. That is it. How do you get data back? Run a similar command with a -P for producer and a -L. Run these three commands and you will get every data set as a file in your system. Simple as that. At the end of the day
we went from spending a lot of time producing the data all the way to analyzing the data and validating our analytics. That is as easy as it gets. So you may be asking yourself, what do we do with this? I have a lot of data, I am anxious, and I do not know what to do. Mordor and CAR go together a little bit. You have specific analytics, like this one provided by Tony Lambert: you can interactively use Task Manager to get the memory contents of [ Indiscernible ]. You may ask yourself how you do that. I did the remote logon; first I will check that I am in an RDP session, and then I will open Task Manager. I am very slow, sorry. I will right-click and create a dump file, and that is pretty much it. CAR gives you some analytics that you can start playing with. They provided
these for file creation. I love to use Jupyter Notebooks. Just think about a Jupyter Notebook as a file to save input, output, visualizations, and notes. It will tell a story about your analytics. The way it works is it has a kernel and client infrastructure, where it gets input, evaluates the input, and gives your data back. This is what it looks like as an architecture. The beautiful thing about Jupyter is that the kernel is not an operating system kernel; it is pretty much the analytics piece that will execute things. The beauty is you can have anything. It is just amazing. Why don't we take this and put it into a Jupyter Notebook in the Threat Hunter Playbook? That is my dog, by the way. I love my dog. We're going to use a
library created by José that allows statistical analysis. Just to let you know why we are using that in there: the way it works is we grab this specific notebook. We have some tags and descriptions about the specific analytic that I would like to document. I want a specific format to tell you the story of my analytic: some data sources, the data sets from Mordor I am using to validate the analytic, and then the processing of the data set. You will have this notebook as well. Grab the data and put it in a way that lets you start running queries. Once I have the data, I have more analytics I can start running, for example, processing where I am looking for specific DLLs and where the APIs are used. Pretty simple stuff. You have two or three extra analytics in the notebook where I can look, for example, at Task Manager: show me any binary it actually affects. And I can join stuff.
>>[ Captioners Transitioning ]
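A sketch of the kind of join described here, using pandas over Mordor-style records. Every field name and value below is a synthetic stand-in, loosely modeled on Security event 4624 logons and Sysmon event 10 process access; real Mordor files are JSON, one event per line, loadable with `pd.read_json(path, lines=True)`:

```python
import pandas as pd

# Synthetic stand-ins for two Mordor-style event tables:
logons = pd.DataFrame([
    {"LogonId": "0x3e7", "LogonType": 10, "User": "alice"},   # RDP logon
    {"LogonId": "0x1a2", "LogonType": 2,  "User": "bob"},     # console logon
])
access = pd.DataFrame([
    {"LogonId": "0x3e7", "SourceProcess": "taskmgr.exe",
     "TargetProcess": "lsass.exe"},
])

# Keep only remote-interactive logons (type 10), then join on the logon id
# to see which RDP session touched LSASS memory.
rdp = logons[logons["LogonType"] == 10]
hits = rdp.merge(access[access["TargetProcess"] == "lsass.exe"], on="LogonId")
print(hits[["User", "SourceProcess"]].to_dict("records"))
# → [{'User': 'alice', 'SourceProcess': 'taskmgr.exe'}]
```

The notebook version does the same thing with Spark-style queries; the shape of the correlation is identical.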
>>I can actually see that someone is running an RDP session mapped to a process accessing [ Indiscernible ] with specific data. We can see the results in there. So this is cool: we took a dataset and started playing with it. We are building a community around this. This is the public channel that you can access to get an invite.
>>Something that I am doing is actually giving you a Mordor dataset and a notebook. There is a project called Binder. It allows you to put together all of the services; at the end of the day it creates an open infrastructure for you to use with Jupyter Notebooks. You can take the playbook, and Binder is going to pretty much create your own Jupyter Notebook server, and you can use it. We can pretty much do something like this: if you are streaming this right now, go to the playbook and scroll down. There will be a Binder link; you click on it and it will pretty much stand up a Jupyter server, a Jupyter Notebook as well, with the specific analytics mapped to ATT&CK. They will extend the analytics portion of that. You will be able to run all of that together, pretty much a Jupyter Notebook in the cloud. You can use it for free, and the Threat Hunter Playbook has that. If you are live streaming, click on it; you might break Binder because there are a lot of people in there. We are doing
this for the community. We love to empower the community. We
want to do it together. All of the things that we build are open source. We will give it to people that do not have the resources. You can get all of these data sources and everything going on for free through your browser. Anyone in any part of the world can write analytics and validate what we are building, with real-time queries and the data sources we build as well. These are all the references
that we have. And before anything, thank you very much,
and thank you for the conference and letting us be here.
>>[ Applause ]
>>I did not think you were going to make it. Awesome. I know folks will talk to you over lunch and over the break. I assume you will be here.
>>Please join me in thanking Roberto and José.
>>[ Applause ]
>>We are getting lots of love on Twitter for those jerseys. The project is awesome. Thanks to Cyb3rPandaH. Look at these guys with the team jerseys and awesome slides. Awesome tweet here. They got me started with ATT&CK a couple of years ago. They do some of the most interesting work. Super exciting talk at ATT&CKcon. Lots of love on Twitter for those folks. As we continue with this theme: how do we write these analytics if we do not have all the data we wish we had? None of us has perfect visibility. If you look at the data sources you do have, you can start to prioritize. How do I get the most bang for my buck? Going back to that theme of prioritization, whether threat intel or data sources, it is that prioritization theme we keep hearing. Please join me in welcoming Keith McCammon.
>>[ Applause ]
>>Thank you very much. All right. Thank you all. Thank you very much to the entire team for having us here. It has been a super fun event, just getting to know folks over the last couple of days and having good discussions. Following that last talk, this will be effectively like your freshman-level introduction to data sources. The stuff you just saw will definitely be super useful for digging into a lot of the things you encounter and the really hard operational challenges that relate to data sources. So, I'm Keith; nice to meet you all. By way of background and introduction, I have been at this for about 20 years in the security industry. As I was
preparing for my talk this morning, which means I was reading Twitter, I came across a tweet from Jeremiah Grossman, who is more useful to follow than I am, or pretty much anyone else. He posed a question this morning, literally as I pulled up the app, sitting there in front of me, and it really resonated. He asked: if there was one piece of advice you wish you had known (I am paraphrasing), what was it? And he provided a couple
of answers of his own. One of those was to focus a little bit less on what is possible and really think through what is probable. And that really resonated. It is really easy, particularly if you have done work for the government or other places like that, to know what is possible. A lot of these things have become public knowledge now. We have a great appreciation for what really high-end adversary tools and techniques look like. It is really easy to get mired in: okay, if I go do these things, there is someone that will still beat me. If that is your threat model, you can rest assured that someone will always beat you. Just try to think through how to get started. When I thought about
what we are here to talk about and learn and share, a lot of it, just for a little bit of context, is that we have spent the last couple of years trying to build a community around ATT&CK, trying to think through how we take this massive body of knowledge and make it approachable for people. How do we figure out how to get started? More than anything else, the subtext behind this, and I think the point of the next 20 or so minutes, is for people trying to figure out how to get started. How do you take this body of knowledge we have compiled, describing the things that ATT&CK covers, and dip your toe in, and what is one of the first things you can do? Thinking through this in the
context of detection engineering, which is not the focus of the talk but is an interesting way to think through how you go from data sources to detection, you have three things you need to do. Make sure you are collecting the right data. Having worked for a couple of decades now alongside folks doing really great forensics work, in addition to security operations and other things, a principle, or a phrase, that has stuck with me is: you cannot go back in time to collect evidence. I see Richard sitting out here, who is a pioneer of that thinking: not waiting until you need data to go and try to find it. Collect as much as you can, as early as you can. There is a balance you have to strike. Once you collect the data, you need to ask the right questions. This is the work we are doing inside of ATT&CK, and it is valuable: what questions might I need to ask, what are the things adversaries are going to do, so I can formulate questions for my dataset. And then answer those questions. If you have ever purchased a product, turned
the detection up to 11, and then turned on the MITRE ATT&CK feed, you have exceptional visibility, but now you have 10,000 questions that you need to go answer. For most teams that is not achievable either. So, striking some balance: what does this have to do with ATT&CK? Heat maps. There is a lot of fixation on measuring detection. If we have proved anything in the industry, at least for as long as I have been in it, it is that objectively measuring detection, protection, prevention, or however you want to think about it, is exceptionally difficult. These things are very context based, and detecting a thing or observing a thing in one environment does not correlate in terms of scope, impact, or severity when you look across a whole bunch of them. We love to fixate on building heat maps and things like that, and they are valuable for some purposes. But it is easy to skip over the fundamentals, like: am I collecting the right data? Especially if you are just getting started, whether because you are new to the industry or because you are a team of one trying to build a security program at a company that does not have one, taking that first step can be daunting. So we cannot detect things that we
cannot see. Observability comes before everything else. You ask questions of your data and you answer those questions, but if you do not have the right data to begin with, everything else is moot. So these heat maps are useful for a couple of particular things. I think the two that I always come back to are being able to assign value objectively to data, and therefore to the tools and solutions you implement, and measuring progress and coverage. If there is one heat map to rule them all, one that I think is exceptionally useful as a way for people to think about how they are making progress in their program, whether they are spending money on solutions, implementations, or tools, it is this: you want to make sure, above all else, that coverage continues to improve in a thoughtful way. Asking questions of data and operationalizing that coverage can be challenging to various degrees and at various points in time. But making sure that you are thoughtfully collecting the right data, and that you understand the cost and the trade-offs, first and foremost, is very valuable. A couple of obligatory
nods. I will start with the ones not listed: the entire team at Red Canary, who do most of the great work that guides my thinking, and Olaf and the Rodriguez brothers. Listed here are a couple of things I have used as resources, people I have followed, and work I have enjoyed following over time; things I used to help think through other ways to look at or value data sources. And obviously the ATT&CK team, for putting this all together.
>>This seemed like a neat idea
when I came up with the title for this talk months ago; now I am back to having to defend this thing. It is an open-ended concept, and it is not the point of the talk. Going back to how you get started, in the context of Jeremiah's question and the answer he provided this morning, it is being in a position to detect most threats most of the time. This is not where your program wants to wind up; I am not suggesting you go figure out how to detect these things and then declare victory. But it is a really useful way to get started. It is a useful forcing function for
building a solid foundation for your program. Extending that a little bit, there are a few principles I want to lay out as we talk through some of the more mechanical pieces of this. These three things are the guides I keep in my brain all the time, the words I remind myself of as I think through, we need to go off and do this thing. Does it maximize coverage? Thinking through the one heat map I come back to in my brain: do I have visibility into a greater percentage of techniques than I did yesterday, from an engineering perspective? Minimize complexity. This is probably not the talk for those of you working at an institution with a huge team and 100 tools. This is the talk for those of you with a small team, who are under-resourced and trying to figure out how to take that first step, build a solid foundation, and build great momentum. And optimize for answers. At every step along the way, as you start to hoard more data, that is awesome. As you formulate analytics to apply to that data, which is a great exercise to go through, at the end of the day there is a huge gap between great analytic coverage and doing detection. You need to figure out which of the questions you have asked, and the guesstimates of leads you have, which of those matter. There might be 100 that matter, and 10 that matter more than the rest. You had better get good at figuring out how to get from point A to point B if you want to achieve good outcomes. So keep these in mind. So we have glanced over
the slide, and think about the data sources as the linchpin. Without these, everything else we want to do is kind of a secondary discussion. I had to jam these numbers in at the last minute, and they continue to change; as the Rodriguez brothers were giving their talk I had a moment of panic that I got the numbers wrong. It turns out today I am only off by one; the numbers are 48 or 72 hours old. The numbers aside, there are great tools for going and interrogating this data. There are 59 data sources, and this is just enterprise ATT&CK, spanning the 266 techniques, assuming the website is right and I am not. They are useful for understanding how we observe a given technique, which is the purpose. Once you get into this, there are some fantastic tools, like the ones you just saw, that are extremely useful; I was watching that talk, and those are the ways you want to address these once you get into operation. Do you need one data source, or all of them, to properly observe a technique? Windows event logs are a good example. You may only need one of the 10 data sources inside of your environment to observe that technique in a way that informs you well enough to develop an analytic, get to an answer, and perform detection.
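One way to reason about which data source buys the most observability is a simple frequency count over each technique's data source list; a sketch with `collections.Counter` on synthetic records (real counts would come from the ATT&CK STIX data, as in the earlier talk):

```python
from collections import Counter

# Synthetic technique -> data source mapping (illustrative only; the real
# mapping spans hundreds of techniques and ~59 data sources):
techniques = {
    "T1003": ["Process monitoring", "PowerShell logs"],
    "T1059": ["Process monitoring", "Process command-line parameters"],
    "T1112": ["Windows Registry", "Process monitoring"],
}

# Count how many techniques each data source can observe.
prevalence = Counter(ds for sources in techniques.values() for ds in sources)
for source, n in prevalence.most_common(3):
    print(f"{source}: {n}")
```

Even on this toy data, process monitoring dominates, which mirrors the shape of the real top 10 discussed shortly.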
The data sources are not clearly defined. One of the principles is to minimize complexity, and when you look at all of the work that goes into ATT&CK, refining and continuing to evolve this matrix and body of knowledge, it is not super important that we have these things perfectly defined. It is important that they are there and that we think about them. Let's talk about where we start. There are four ways, a progression that I have gone through; it has been my mental journey, and that of many folks on my team and a few others I have talked to this week. These are the four high-level steps. Understand prevalence: which things occur most often. Focus on a class of data or product. If you are just getting started, maybe the decision is easy; once you get further into your program, you are trying to close gaps, and your decision the first time you go through this process may be completely different from the decisions, the conclusions, or the weighting you assign to these data sources the third or fourth time. Then you differentiate. And you overlay what we know and things we have heard; I will skip over the intel parts of this. We have done a good job of helping everyone understand why it is important that we share operational data and insights, so whether you are just getting started or trying to close those gaps and go from 80% to 100%, wherever you are, it is important to overlay operational context and understand how and why these things happen in the real world. Understanding
prevalence is simple. This would not be a con talk if there wasn't a typo. So, hunters-forge; this is a little more superhero than it is Lord of the Rings. At any rate, I am only showing this tool because it is something I came up with some time ago to answer some of these questions for myself. I am not going to spend any time on it; I use it to make numbers, and you can re-create those numbers. Top data sources by prevalence: I think there has already been a good bit of discussion about this, but this is it. These are the top 10 data sources in the ATT&CK dataset as it exists today, or 72 hours ago; I am assuming they haven't changed much in that time. That is pretty straightforward. Let's
think about how we take that and focus on data or product. I
tried to group these data sources based on the types of devices collection platforms
that they will come from. You’ve got some end point, network,
some things that can come from either like net flow and stuff
like that and that is down to what type of coverage hereafter.
We have cloud as of a few days ago, which is now screwing up all the numbers I put together for this deck. Then you have mail server logs and web server logs, specific to types of applications, and they are materially useful depending on the techniques you care about. Take those and understand, at a high level, what types of things am I going to collect data from, so that I know what solutions I need to move this needle. If you just take that top 10 that I showed you and look at where those come from, it becomes a little bit more clear. It will come as no surprise to any of you, although, having spent most of my career as a packet monkey, most of my time is now spent working with endpoint data. This is not a suggestion that you should do one or the other. Again, think through how you start to make the next step forward. Let's
assume for the sake of argument that you are digging in to closing this 80%. Now you've got to differentiate within a class of product, and this is where things get interesting. You need some data from your endpoints, so what the heck do you do? You've got a couple of solutions here. This is always a super interesting conversation to have with folks that are trying to implement an endpoint solution for the first time. Which of the things available to you do you start with? There are tons of choices: open source stuff, all of these providers, a variety of approaches, and then commercial products. In thinking through how to process that for purposes of making an initial decision, I have found it useful to bucket them into one of two categories. There are solutions that are visibility-first, meaning they are optimized for collecting these data types consistently, all of the time. On the other side of that you have protection-first solutions. These are things whose first job is to defend the endpoint, and they also try to account for collection of data that's useful for hunting, investigation, and things like that. Their primary purpose is to prevent bad things from happening first and foremost, and then they fulfill these investigative and detection use cases secondarily. And so, that is cute. It is a neat
picture, but the point of this is putting some data behind it, to give you a basis for making a decision. The way I ended up doing this was just to say: for a visibility solution, which could be a commercial solution or something you already have in a Windows environment that you just flip on, enumerate the data types that are collected, always, by most of the products or solutions in that bucket. You can see here, for visibility solutions, open source or commercial, these tend to be the data types that you're going to get very consistently, assuming you do not monkey with the configuration. On the
protection side, if you think about the data sources at the top of the list, they very clearly optimize for collecting those all of the time, and you will get other types of data from them, but you will get it selectively. Those agents will try to decide when a process or behavior is interesting and then provide you with additional data in those cases. This tool makes it easy to measure this in some way. If you take those sources for protection, meaning effectively any product you purchase to do endpoint protection that also does data collection, it is going to give you process monitoring and command-line monitoring. Those two elements get you 70%. That is pretty good: going from nothing to something, and being able to cover down on 70% just by deploying almost any product out there that attempts to meet these use cases, is super valuable. This is not a product talk, but it gets you up over 80%, and that is awesome. For the first time, a few days ago, I thought I would take a look at the data sources the free tooling provides consistently, which is almost 85%. For something that
will cost you nothing other than your time to turn it on. Again, we're not talking about building analytics or operations or the end state of detection. We are trying to figure out what moves the needle on collecting data that the ATT&CK matrix says might be useful, putting ourselves in a position to observe the things that matter.
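The needle-moving arithmetic here is just set membership: count a technique as potentially observable if at least one of its listed data sources is collected. A minimal sketch, with a stand-in technique list rather than the real matrix:

```python
# Stand-in for the ATT&CK technique/data-source mapping -- not the real matrix.
techniques = {
    "T1059": ["Process monitoring", "Process command-line parameters"],
    "T1086": ["PowerShell logs", "Process monitoring"],
    "T1043": ["Netflow/Enclave netflow", "Packet capture"],
    "T1193": ["Mail server", "File monitoring"],
}

def coverage(techniques, collected):
    """Fraction of techniques with at least one collected data source."""
    observable = [t for t, sources in techniques.items()
                  if any(s in collected for s in sources)]
    return len(observable) / len(techniques)

# Almost any endpoint protection product yields these two sources.
print(coverage(techniques, {"Process monitoring",
                            "Process command-line parameters"}))
```

With this toy set, those two sources alone make half the techniques observable; against the full dataset, the same computation is what produces the "70% from almost any product" style numbers.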
So again, we are three quarters of the way through this. The last piece you have heard a lot about: overlaying the operational context. While it is neat to build a great foundation so that you are collecting the right data, and while it is neat to have some objective basis for assigning value to that data or making an implementation decision, now you've got to take advantage of the things we have learned from putting this into practice, from the folks that have built analytics on top of this and have gone from asking thousands of questions of this dataset to finding the handful of things that matter and that you need to respond to. Take that operational context and overlay it. This is the top 10 from a report last year. It shows a team that has built thousands of analytics, looking at those things and answering those questions (is it bad? is it not?) and escalating to an enterprise where they can respond. These are the top 10 based on prevalence. When you apply this, you take it and overlay it with the data source prevalence information.
You can start to make some of those hard decisions with respect to features or functionality of tools, because you can't afford to collect everything. You can't just turn it all on and assume you can operationalize it. You are going to make trade-offs. Security and risk management: welcome to a career of making trade-offs. Super exciting, and a
great thought exercise. Again, just take this stuff and think through how and when you would overlay it in your decision-making process. You can look at trends; I think someone asked indirectly whether we are ever doing that again, and of course we are. This is an interesting way to think about those, and more so than anything else, now that MITRE is collecting sightings, this is why it becomes critical. Prioritizing data sources in practice will be different for every enterprise, and how those trends change over time is going to be different across the industry, individual enterprises, industry verticals, and things like that. PowerShell shows up across the board; we continue to see that. And then
you look at stuff like this, which is interesting when you think about the implications of what you collect and how you use it. Windows admin shares and remote file copy over the last 12 months have trended just like this. Does anybody know who to thank for that? It is the great destroyer of enterprises over the last 12 months. When you think about data sources, operationalizing those, and prioritizing how you respond: once you get over the hump of collecting the data you want, you can start to take data from the community, whether it's from reports from whomever, but more importantly things that you learn inside these communities, by sharing information with one another here and elsewhere, and apply that context. I walk into pretty much every threat intelligence discussion with the position that it's always phishing. There are a thousand attack techniques, but when you look at how breaches happen, it is always phishing, which I realize is oversimplifying. The important
takeaway is, again, reinforcing what you've heard: everyone here should be thinking about this, whether it's on an individual basis, in a Slack community, or wherever. If you are a large enterprise building analytics, you have this data and this resource; if you're a service provider, now, with MITRE, there is a way to do this, and maybe for the first time ever we have a way to share useful threat intelligence in a way that scales and does not betray privacy, the concerns we have had when we try to share things like domains. Everyone should be thinking about how
you can contribute, whether it is thinking about new ways to prioritize data sources and assign value, getting into the details of implementing a particular data source and the attributes of it that are interesting, or sharing operational data at a high level: what outcomes did you actually achieve? Get back to principles and keep the spirit of this: maximize coverage, minimize complexity whenever you can, and keep optimizing for answers. With a lot of the stuff you've seen over the course of the last few days, hopefully you can see that thinking about data collection, and the value we assign to the data we collect, is a useful first step. That's it. Thank you very much. I
appreciate it. >>[ Applause ]
>>Awesome. Thank you so much, Keith. I'm sure Keith will stick around at lunch shortly. Thank you so much for that talk. I was struck by a lot of themes there. We have been trying to help folks figure that out, and I appreciate the throwback to yesterday and the ATT&CK sightings we talked about. Folks on Twitter are appreciating the talk. A quote I really liked: don't think about what is possible, think about what is probable. That is the focus ATT&CK takes. Folks in Poland are watching live. And Casey Smith is showing support for his friend Keith, summarizing key points from that talk: minimize complexity; what data do you need to answer questions that are interesting to you and your team, which to me sounds like intel requirements. Thank you so much to Keith for that back-to-basics talk. It's my
pleasure to introduce another one of the ATT&CK team members, who goes back to the theme from yesterday of talking about controls, like our CIS talk. He focuses on adversary emulation, has a military background, and will talk about the work he has done with community members on taking these controls, or mitigations, and mapping them to ATT&CK techniques. It is not always easy, and a lot of different folks are doing this, sometimes separately, so the idea is to bring together people who were doing things in disparate ways. Join me in introducing Mike Long.
>>[ Applause ] >>I will go ahead and get started. Thank you for the warm introduction. To everybody else, I'm grateful to be here and share this information with you. Today I'm going to share an update on one of the many projects the ATT&CK team is working on and supporting. It involves mapping various information security control frameworks to ATT&CK techniques. We will provide an overview of this project, talk about specific challenges we are trying to address, share prototypes we've created, and then show you some future goals and plans, basically how we want this project to go going forward. As I look at the clock, it's not lost on me that I am the last thing standing between you and a great lunch. I will keep this moving.
>>Fundamentally, what kicked this off is the fact that many of the organizations we support are required by policy, law, or best practice to select and implement information security controls documented in various publications: PCI DSS, NIST 800-53, and of course the CIS Controls. We've got all of these different controls to provide guidance, and one of the commonalities is the mindset that when we are picking controls, or deciding how to configure and implement them, those decisions should be driven by knowledge of the threats targeting our organizations and the vulnerabilities found within. We
will talk about examples where ATT&CK can help. Many organizations still experience similar challenges. Some of these frameworks are vast in scope. Take NIST 800-53: when you expand the controls into their implementations and enhancements, you're left with thousands of controls. It is easy for people to ask, which controls should I select, and why? I have seen organizations that opt for the brute-force method. They are fortified, but at what cost? We have other organizations that have been dealing with these controls long before ATT&CK existed. They are wondering, how do we better integrate ATT&CK into our processes
and technologies. These are problems we have been focusing on, and we have talked to a lot of organizations taking the initiative and making their own ATT&CK control mappings. We will show you some of the cool use cases you can derive from these mappings. The question for us as the ATT&CK team is, how do we curate these mappings so we can make them available to the global cybersecurity community? That is the premise behind this project, and we will show you different prototype mappings we have made along the way, challenges we have encountered, and how we hope to address them going forward. >>We will start by showing an
example. We talked about these mappings, but what do they look like and how might they be helpful? This is an excerpt taken from a prototype mapping we made, from mitigation strategies to ATT&CK tactics. You will notice the mapping values were derived from NIST. We can do useful analysis with this. You might start by asking which of these controls can help mitigate threats. Many organizations have an abundance of protective controls, but they might be asking, what is our effectiveness? We take a mapping like this and we can walk down it and ask thought-provoking questions. Are we continuously looking for network intrusions? Are we exercising our defenses? As you go through this workflow you will likely identify capability gaps, which can help inform your roadmaps. Likewise, these mappings will
give you an understanding of your current coverage, and that can be useful for building follow-up assessments measuring the effectiveness of your existing controls. In the grand scheme of things, these mappings are a tool we can leverage in order to improve our understanding of our cybersecurity effectiveness. I will share with you another prototype mapping that was developed in response to a strong community request, showing how ATT&CK maps to the controls in NIST 800-53. We made a lot of different prototypes, and through this process we have identified a number of challenges. I will share one of the prototypes we created and talk about how it works and what kind of limitations are present. Like most prototypes, we started with a spreadsheet. On the left-hand side we have our ATT&CK techniques, organized by tactic, and on the top we have our NIST controls. One of
the immediate challenges is that the spreadsheet is extremely large, about 244 columns, making this a little bit unwieldy at times. Scrolling down, we have 266 ATT&CK techniques. So how do we deal with the fact that these frameworks are going to continue to grow and expand? Another approach was dividing this by control family. It is a little bit smaller and easier to cope with, but at this point it is still an Excel spreadsheet, and we need to find a better way to cope with the scale. That brings us to the challenges we derived by creating these different prototypes. If you have made these mappings,
I suspect you have experienced similar challenges, and this is the basis for the solutions we plan on developing for the future. For example, we mapped to the different functions of the NIST Cybersecurity Framework. We want to have criteria that are very clear, with defined conventions, so if we are sharing this it's easy to jump in and make contributions following known standards. We recognize the fact that both ATT&CK and the different control frameworks will continue to grow and expand. It might not be enough to grow these in an Excel spreadsheet.
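One structured alternative to a giant grid is one record per control, listing the techniques it maps to; that pivots in either direction without hundreds of columns. A sketch: SI-3 and SI-4 are real NIST 800-53 control names, but the technique assignments here are illustrative, not an official mapping.

```python
import json

# One record per control instead of a giant grid. SI-3 and SI-4 are real
# NIST 800-53 controls; the technique assignments are illustrative.
mappings = [
    {"control": "SI-3", "name": "Malicious Code Protection",
     "techniques": ["T1193", "T1204"]},
    {"control": "SI-4", "name": "System Monitoring",
     "techniques": ["T1059", "T1086", "T1193"]},
]

def controls_for(technique):
    """Pivot: which controls address a given technique?"""
    return [m["control"] for m in mappings if technique in m["techniques"]]

print(controls_for("T1193"))    # both controls map to this technique here
print(json.dumps(mappings[0]))  # and the records serialize trivially
```

Because each record is independent, new controls or techniques are appended rather than grown as new rows and columns, which is what keeps the approach scalable as both frameworks expand.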
We might need a structured data solution that can keep up with the changing nature of these control frameworks. And then there is scale: hundreds if not thousands of controls. We need to find a programmatic way to deal with that. The last thing I will share is that we have seen many organizations with mappings, and we know they are probably of great value to the community. The challenge is sharing them. You're talking about your security posture, and there might be sensitive data you don't want to reveal to the public. Going forward, we want to find a way where organizations can share these mappings and we can make them available; perhaps MITRE can be the curator to make that happen. We talked about some of
the different challenges and showed some of the prototypes. We're going to talk about what we can do to get this into your hands in a timely manner. Our end state is that we want to provide a curated source of trusted mappings that can support the community. In many ways, ATT&CK has been a success because it is driven by the people who use it. Beyond that, as far as the technical approach going forward, we will develop a flexible structure, and I look to some of the other structures we've seen. We can take a large body of different criteria and put it in a structured format. If we have that, it should be more scalable than an Excel spreadsheet of doom. And once we have this format, it should be easier to make that data accessible in a user-friendly application. That could be the ATT&CK Navigator, or it could be a separate entity, but we want to get these mappings to you, and we want it to be an easy process to use them.
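As a sketch of that last step, a mapping can be emitted as an ATT&CK Navigator layer, which is just JSON. The field names below follow the Navigator layer format, though the exact schema version a given Navigator build expects may differ, and the control-to-technique pairs are invented for illustration.

```python
import json

# Control-to-technique pairs to render (invented for illustration).
mapped = {"T1059": "SI-4", "T1193": "SI-3"}

# Minimal Navigator layer: one scored entry per mapped technique.
layer = {
    "name": "Controls mapping (prototype)",
    "domain": "mitre-enterprise",
    "techniques": [
        {"techniqueID": tid, "score": 1, "comment": "Mapped to " + ctrl}
        for tid, ctrl in mapped.items()
    ],
}

print(json.dumps(layer, indent=2))
```

Loading a file like this into the Navigator highlights the mapped techniques on the matrix, which is exactly the "get it into your hands" delivery path described here.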
That brings us to our conclusion. The first thing I will point out is that this project is in its early stages. This is the perfect time to offer your input, ideas, and perspectives. This is a subject that people have strong ideas about. If there are specific mappings you want, let us know which and why, and that will put us in a better position to try and make that a reality. If you are one of those organizations that has taken the initiative and made a great mapping and you want to share it, reach out. We are happy to figure out how to get it out there. And this underscores a common theme at ATT&CKcon: when we work together, we gain advantages over our adversaries. Only by working together can we make this a reality. So let us know if you want to help; we're happy to have a discussion about how to make that a reality. That was the controls
mapping update. I’d like to open up the floor and answer any
questions you may have. >>The most common question is
what are they serving for lunch?>>I know this is something a
lot of folks have looked into. Are there any questions for
Mike. >>If the wrong questions now I
will be around. Please reach out to
me and I will be on-site and likewise if you have any further
questions or requests feel free to reach out to us at the
information below. >>You mentioned the challenges
slide, but can you talk more about the one-to-many challenge, the fact that it depends on how you implement a control? What were you finding as you were trying to map these controls?
>>That is one of the big challenges. It is easy to look at an Excel spreadsheet and say, if we have malicious code protection, that would block spearphishing attacks. It really depends on the specific implementation and a lot of variables. We want to offer guidance to help organizations while respecting the fact that the effectiveness of these controls depends on your implementation. At the end of the day, this is a starting point to understand your environment, recognizing it does require deeper examination to understand the effectiveness. >>Well said. I appreciate you
bringing that into perspective. I don't know if this is actually going to work, maybe not so much. That's another theme we have talked about: a false sense of security, and that is a risk here, a risk of doing the controls mapping. As we as a community look at this, keeping that in mind is important, and hopefully we can strengthen this and move forward together. Please join me in thanking Mike Long. >>[ Applause ]
>>Excellent. Some love from Twitter. Here's a screenshot on how to map ATT&CK to improve the understanding of our cybersecurity program effectiveness. Also, Mike covered the critical topic that a lot of controls are often a focus for regulatory reasons. We know that doesn't always equal security, so we should map ATT&CK to them, bringing together different communities here at ATT&CKcon. That brings us to lunch. Thanks to our sponsor, and we encourage you to keep talking to each other and visit our exhibitors. And of course, folks online, he's back: I will send it over to Jamie Williams, and see you all at lunch. We will come back at 1:15 Eastern. See you all soon.
>>Welcome back to the ATT&CK couch. I am here with a proud father, and probably still a great artist and great photographer, who spoke about ATT&CK coverage. >>I'm getting a lot of new
ideas, and the training on Monday was excellent. It is great to see so many people looking at ATT&CK in all these different ways, and also to think of new ideas on how to apply it properly and help my clients and other people in the community. >>It is an honor to have you
here. So, CTI and data. Your talk was interesting because you flipped the script: let's not talk about our adversaries, but look at ourselves and understand ourselves. What is the importance of understanding the data you have internally? >>It is super important. Like he highlighted, people are always rushing, "I need to cover this," but in order to cover it properly you need to learn first how it applies to you, what kind of data you already have, and what you can utilize. Also, your own environment is totally different from probably every report that you read out there. Everyone applies base standards differently, and there is a best practice, but hardly anybody adopts it. It is really important to know yourself first, before you start looking at what is out there. >>Excellent perspective. ATT&CK
is something we build and maintain, and you really tease out all of the value. One of the really interesting and novel things you did was take data sources and apply weights, like "a behavior is 40% this." Can you speak towards that process and how you came to that conclusion?
>>Some of them are guesstimates; I don't see all of them being applied. Basically, it's what me and my team see, and what I see in reports: the prevalence of the data, and where I see the most value in terms of being able to detect, to give a probability rating of where you would be most likely to find this. As I said, it's not an exact science, so there might be some points that differ.
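The weighting idea reduces to a weighted sum: each technique assigns a weight to each data source, and your score for the technique is the sum of the weights for the sources you actually collect. The weights below are guesstimates for illustration, exactly as cautioned here.

```python
# Per-technique data-source weights -- guesstimates, as discussed
# (e.g., "this behavior is ~40% command line").
weights = {
    "T1086 PowerShell": {
        "PowerShell logs": 0.4,
        "Process command-line parameters": 0.4,
        "Process monitoring": 0.2,
    },
}

def technique_score(technique, have):
    """Weighted share of a technique's useful data that you collect."""
    return sum(w for src, w in weights[technique].items() if src in have)

have = {"Process monitoring", "Process command-line parameters"}
print(round(technique_score("T1086 PowerShell", have), 2))  # 0.6 here
```

Even imprecise weights beat a binary have-it-or-not check, because they distinguish a data source that mostly covers a behavior from one that only glimpses it.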
>>Even if it's not perfect, it is a step towards understanding yourself and really knowing, I'm in a better place than I was before. In that same vein, three interesting metrics you called out were completeness, availability, and timeliness. What do you mean by those?
>>Sure. Take timeliness, for instance: you have an environment with a lot of laptops, and they go abroad and come back, and all of the data flows into your environment. Most of your detection logic is based on the last 15 minutes or the last hour. It looks at the generated timestamp, and not necessarily at the time it was ingested. If it was generated an hour ago, you're still good. If it was generated last week, while your colleague was in Africa or wherever, you won't see it anymore. So that is why I factor that in. The same goes for availability: do you store it locally? And the coverage is mostly how many systems you cover and how good the quality of the data is. Is it usable? Is it one big blob you can't really do anything with? >>We have all been there.
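The timeliness problem described here, where detection logic keyed to a recent window of generated timestamps silently skips events that arrive late, can be measured directly if you record both timestamps. A sketch with hypothetical event fields:

```python
from datetime import datetime, timedelta

# Hypothetical events recording both generation and ingestion times.
events = [
    {"generated": datetime(2019, 10, 29, 12, 0),
     "ingested": datetime(2019, 10, 29, 12, 1)},   # normal: 1 minute lag
    {"generated": datetime(2019, 10, 22, 9, 0),
     "ingested": datetime(2019, 10, 29, 12, 5)},   # laptop abroad for a week
]

def late_arrivals(events, window=timedelta(hours=1)):
    """Events whose ingestion lag exceeds the detection window; a rule
    that only looks back `window` by generated timestamp misses these."""
    return [e for e in events if e["ingested"] - e["generated"] > window]

print(len(late_arrivals(events)))  # 1: the week-old event
```

Tracking the size of this set over time is one way to turn the timeliness metric into a number rather than an anecdote.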
>>Out of curiosity, have you introduced any other metrics, or are those encompassing? >>I probably would add some more in-depth scores, but I wanted to keep it as usable as possible for everybody out there. Having to fill in those metrics may already be cumbersome with the ones that are there; if you really do that for everyone and for every one of them, it might not be a usable tool anymore. I welcome people to add to it if they see value in it, like parsing and all of that, but I want to keep it as flexible and easy to use as possible.
>>We definitely see you as a thought leader. What is next for you? I think you mentioned graph model assessments. >>I will continue work on the ThreatHunting app. I see a lot of value in it, and I get so much amazing feedback from people, so I will definitely keep working on that. And also the graph model. I got inspired by a colleague who basically started this, and also the [ Indiscernible ] guys have done an amazing job with that application, and I want to see if I can combine that into an application where a red team or a blue team can put in some metrics and see how ATT&CK can help them in defending or protecting the network.
>>Anything we can do on our end, we would be honored to help contribute to that. Are there any plans for Halloween or costume ideas? >>I will be on the plane on Halloween. I don't think they'd appreciate me going through TSA with a costume. Last year I was in Vegas and it was a crazy party. We have the equivalent, called Sint Maarten. It is next Monday. I will take the kids out and get some candy.
>>Thank you for your time, Olaf. We will bring in our next presenter, who talked about prioritizing data sources. Thank you for joining me, Keith. Very humbling, a 100-level talk. It is a hard problem that we see a lot of people come to us with. I'd like to give you the opportunity to talk about the importance of, with all these great ideas out there, making sure that foundation is solid.
>>I think it is one of the things that we've learned: we've done all of these things over the years, and some of them are really complex data operations problems, but most of them at the end of the day come back to the principles, trying to keep things simple, minimizing complexity wherever you can. A general problem-solving approach. And I have had, I'm sorry. >>Start over? >>Cool.
>>The importance of making sure your foundation is solid, that you have solid footing, and then embracing those follow-on innovative ideas. >>The approach is trying to simplify things as much as possible. Take it down to the small number of things you need to understand really well, so that you can build a bunch of complex things on top of them.
That is a challenge. As engineers and security people, we love to dig into our problems and dig into data, and there is absolutely a place for all of that stuff. Like I mentioned in the beginning, there is the feedback we've gotten as we have done community events and talked openly with other people around the conference: what is most interesting to you, and what do you want to learn about ATT&CK? What do I do first? Do I make the red-green chart for my manager, who wants to know, am I covered if this thing happens? Or is it just understanding, from a security architecture and engineering perspective, do we have the data that we need to do any of this stuff really well? You can overlay threat models and make that as complex as you need to. It seems like most folks have a pretty firm appetite for understanding that first step they can take. >>Absolutely. One of the
interesting things is, once you get that foundation and you think you're ready for that second step, when do you know it is a good step to take? Is it a leap of faith, or how can you understand: I have solved the simple yet complex problems, and I'm ready to move on? >>That is a tough question.
Maybe the answer is that all of this stuff is iterative. I think it comes up in almost every talk about the topic: none of this stuff is ever done. You are always making trade-offs. Do I do this next, or this next? Effectively, set really clear milestones. Maybe milestone one is to put coverage in place and just achieve visibility into the things that are most likely to happen. That is your first milestone; you don't have to get super hung up on the operations side of that. Now figure out, what do I look at, and how do I understand these techniques? Maybe I just pick 10 of those. Let's go break that
off, put detection in place, and make sure analysts understand the context, and that's it. That is your next milestone; then move on to the next one. At every stage along the way there's a trade-off: do I build it on top of something open source, or purchase a product? Do I build a team, or do it myself? Do I do this with a partner or a service provider? Once you get beyond that, now you are in forensics. Really simple milestones. >>I think I really appreciate
what you said about not just doing things, but doing them very thoughtfully. There are trade-offs in complexity, and a big problem is not just moving forward but tracking progress. The milestone approach enables you to not just look forward, but check backwards and make sure you have maintained and solidified everything behind you.
>>Yes, I do not know, it is tough. It's a super interesting general concept to think about all of the time, even if you take things like ATT&CK out of it. Everyone loves to harken back to things like old school [ Indiscernible ], and when you think about building a good foundation, something you can build upon, the thing that enables you to solve hard problems, even that is an interesting tangent to go down, and a thought exercise. Maybe a particular solution is great for detecting a lot of the things we worry about today, but there are also 20 or 30 years of things we don't have to worry about because they are handled.
And then data sources are another way to think about that. Build that solid foundation. Part of it is having confidence that you are doing things thoughtfully; you don't have to have a perfect plan, but having a plan helps. You are going to be making trade-offs and hard decisions, and the landscape is going to change; the things you're trying to detect will all change over time. You want to have the confidence to have a plan and not overcomplicate things. Preserve your ability to constantly make forward progress. Do not get hung up on where you think you should be. Is today better than yesterday?
>>You are where you are for a reason. No one wants to get burned by Mimikatz. I think this is my last question for you. You mentioned data sources and definitions. Any other opportunities for us on the ATT&CK side to enable that first step in building the foundation?
>>That is a really good question. I never thought through what we would ask ATT&CK to do; there is a tough balance there. Maybe the answer is the thing that you were doing with product assessments, and maybe that is actually the best way to answer those questions; they are very environmentally independent. Techniques matter for different reasons, and they have different weight. I appreciate the level of abstraction that exists in the data sources. They are really simple, and if you peel down into, say, process monitoring, it is fun: it gives you a small number of consistent data points when a process is started. If you look at visibility products, that might be more expansive. It is really different. In most cases, for instance, if you look at the way processes and process command lines roll up, most products just jam those together; from an implementation perspective, process monitoring and command-line monitoring are the same thing. You can go down those rabbit holes all day long. For the sake of simplicity, keep it really simple and high level. The level of detail that's in there now is good enough to get people thinking about it, which is what you want. Don't just fixate on the technique; you have to understand, how do I observe this thing in the first place? If nothing else, as an attention
grabber. >>A big difference, completely different roles, but both process monitoring. Thank you for your time. >>We're going to take a quick break as we transition our next guests, José and Roberto, onto the couch. We will be back in one moment. >>Welcome back. I am here with the
dynamic duo, Roberto and José Rodriguez. First comment: we have known Roberto for a long time, and recently you introduced us to José. What's it like working together? >>I came to the States last year, and I had to change my whole career from data science to cybersecurity. Working with Roberto has been a really good experience. He's my brother and my coach.
>>He brings a lot of the data analytics side and beyond the
basic query stuff. He likes to go to the statistical analysis
portion of what we do as practitioners. That is where my
expertise and his expertise lends in very well and we can
come up with a lot of projects together. He is not just my
brother but a friend also. We work on a lot of projects.
>>It keeps you on your game.
>>One of the things you were mentioning: first off, we have been keeping a tally on Excel being good or bad. You guys are pro-Jupyter.
>>We love them because they give us freedom. When we try to analyze data, for just basic filtering Excel is great, but once you start taking this data source and that data source, doing some compound analytics and statistical analysis, it's a little complex. That's why Jupyter Notebook has been so flexible for us.
>>And then there's the macro syntax.
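The compound, cross-data-source analysis described here is exactly what outgrows spreadsheet filtering. A minimal sketch in plain Python (in a notebook this would typically be pandas DataFrames; the event fields and values are invented for illustration):

```python
# Two hypothetical event lists standing in for separate data sources
# (field names are illustrative, not a real telemetry schema).
process_events = [
    {"host": "wks1", "pid": 4242, "image": "powershell.exe"},
    {"host": "wks1", "pid": 5001, "image": "notepad.exe"},
]
network_events = [
    {"host": "wks1", "pid": 4242, "dest_ip": "10.0.0.99", "dest_port": 443},
]

# Compound analytic: join the two sources on (host, pid) to find
# processes that also made outbound connections. Basic filtering in
# Excel handles one source at a time, but not this kind of join.
net_by_key = {(e["host"], e["pid"]): e for e in network_events}
matches = [
    {**p, **net_by_key[(p["host"], p["pid"])]}
    for p in process_events
    if (p["host"], p["pid"]) in net_by_key
]
```

Over real telemetry the same join is a one-liner in pandas (`procs.merge(conns, on=["host", "pid"])`), which is the flexibility the speakers credit Jupyter for.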
>>One of the things I really appreciated, especially in that session: you guys did a great job on the data, and I appreciated what your talk was doing, looking at all that content, taking the metadata and making it more specific. Can we go a step beyond ATT&CK?
>>Well, actually, when you're validating a specific technique you're not looking for just the technique but a variation. When you are researching a specific variation you need to start asking yourself: okay, what data do I need for the specific behavior I'm looking for? Then you go and check what data sources you need based on the behavior of the attack.
>>For example, trying to go deeper, I think we opened a lot of doors out there. We see friends of ours finding new ways to perform a specific technique. If you take that technique back to the data sources and see where it does or does not trigger them, sometimes that's an interesting way to start thinking about data. Not every technique is that way all the time; it changes depending on the variation. It's amazing to see how it drives the new data sources you need. Some noisy events will come up even though the variation is very advanced from a writing perspective. It is pretty cool that you are enabling more data sources to start working for you, and that is the value I see in getting into the detail of the data source aspect of ATT&CK.
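The workflow described, taking a technique's variations back to the data sources each one actually exercises, can be sketched with simple set arithmetic. The variation names, data source names, and collection state below are all hypothetical examples, not taken from ATT&CK:

```python
# Hypothetical mapping from one technique's variations to the data
# sources each variation exercises (names are illustrative).
variation_sources = {
    "remote-thread-injection": {"process monitoring", "api monitoring"},
    "registry-run-key":        {"windows registry", "process monitoring"},
}

# What we are actually collecting today (also illustrative).
collected = {"process monitoring", "process command-line parameters"}

# For each variation: which required data sources are we not collecting?
gaps = {v: sorted(req - collected) for v, req in variation_sources.items()}
```

The `gaps` dictionary is the "does this variation trigger my data sources" question in code form: an empty list means full coverage for that variation, anything else is a collection gap to close.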
>>From the ATT&CK team: you guys are empowering people and we appreciate that. I think I counted 50 mentions during your talk: validation. Can you speak to the importance of that?
>>That to me is something very interesting. As practitioners, when you talk to people who are starting in the industry and see all of these code snippets and tweets about specific techniques and analytics, it is fun to run the query and try to find specific things. But when you come down to doing it for an organization and your program itself, you need to validate what you are doing and how it's impacting even your technology. All of these things play a big role: validating what you're doing will definitely justify a lot of what you do and allow you to think about other ways to approach the analytic as well. You identify whether it works, or whether you might need extra context. It is easy to grab a specific rule, apply it, and say I'm good. When you go deeper you start enabling a lot more of that context and seeing what you're missing.
>>That really brings in the research piece.
>>It is based on needing to add more context to validate: how can we say this, how can we confirm the behavior in your environment?
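One concrete way to do the validation being described is to replay an analytic against a small labeled set of events and count what it catches and what it misses. A toy sketch; the rule, field names, and events are all hypothetical:

```python
# A small labeled event set: which command lines were actually malicious.
events = [
    {"cmdline": "powershell -enc aGk=", "malicious": True},
    {"cmdline": "powershell Get-Date",  "malicious": False},
    {"cmdline": "notepad.exe",          "malicious": False},
]

def analytic(event):
    # Naive illustrative rule: flag encoded PowerShell command lines.
    return "-enc" in event["cmdline"]

# Validation: does the rule fire where it should, and only there?
true_pos  = sum(1 for e in events if analytic(e) and e["malicious"])
false_pos = sum(1 for e in events if analytic(e) and not e["malicious"])
missed    = sum(1 for e in events if not analytic(e) and e["malicious"])
```

A clean run here does not end the exercise: as the speakers say, going deeper usually surfaces the extra context (parent process, user, timing) the rule still needs.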
>>You never really know.
>>You would rather know; that is awesome. A question from Twitter about the naming convention: Lord of the Rings. Is it a flavor of the day or did you guys sit down and plan it?
>>He's really famous in the community.
>>We love Lord of the Rings. Since we were young we watched it a lot in Peru, and that is something where we try to put a personal touch in there. It brings me back to my memories with my brother. A couple of other things play a role, like the names of my dogs and stuff like that. I love my dog; his name is Pedro and he is so famous he shows up in different demos. In every demo you will find my dogs; that's the goal, to make the dogs part of the community as well. Who knows, he might have his own Twitter soon.
>>Actually it is really funny: when we were preparing the presentation and going over something specific, we would remember the scene from the movie, so it's really funny.
>>We remember the Lord of the Rings stuff. It is actually very interesting how we put our own personal stuff in there to share with our community and make it real.
>>We really appreciate you
empowering us. You plugged contributing back to you; I think you mentioned Binder and Slack.
>>I think one of the first things with the Slack channel we posted the link to: one goal is to actually start conversations about open source projects. I have joined a couple of channels before and it turns into let's talk about what we're doing today, and things like that, but we also want to pay attention to these amazing initiatives in the community. And back to you guys: how do we empower the ATT&CK team? There are a lot of good initiatives and a lot of things that, as a community, we can build in a better way.
>>I would say going through some of the things we have out there, seeing where things fit with the mission of ATT&CK, and it would be great to hear from you guys and say we could actually work together, and who knows, maybe we can move it directly into ATT&CK. It started as a little thing and now we see him doing his training, and that is amazing. We can collaborate on a lot of that. From the Binder perspective, it's something I talked a little bit about: it is pretty much trying to empower others who don't have the resources and might not have the expertise to build a Jupyter Notebook from a Dockerfile. Not everybody is an engineer.
That is pretty much the reality. So our goal is to say: this is how Binder allows you to share your work, as long as it is open source and free. You can use the infrastructure to start running things like Jupyter Notebooks and validate your analytics. The beauty of that is that you can have interactive queries being run through Binder and Jupyter Notebook, because when we come up with analytics, our data sources are not just giving you the two or three events we selected where we believe the technique was used. We give you a snapshot of data, and the analytics say look for this, but there might be other things we didn't think about. That to me has been powerful. Every time I talk about a technique and an analytic, sometimes we're having dinner or lunch and I'm like, man, this would be awesome if we joined this and this and that. We go back to that specific notebook and update the analytics. That is what we want from the community: check the work out there and the analytics and tell us this could be better, this could use extra context. Every data source is not only the specific events we are collecting; it is everything, not only three or four events. If people from the community find a different way or something that we've missed, let us know. We can contribute to that.
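The snapshot-plus-analytic idea can be sketched in a few lines: ship the full event snapshot with the notebook, not just the events the analytic flags, so readers can discover context the authors missed. The events and fields below are invented stand-ins for the Sysmon-style JSON such snapshots typically contain:

```python
import json

# A snapshot of events shipped alongside the analytic, inlined here as a
# JSON string so the sketch is self-contained (values are illustrative).
snapshot = json.loads("""
[
  {"EventID": 1, "Image": "cmd.exe"},
  {"EventID": 3, "Image": "cmd.exe"},
  {"EventID": 1, "Image": "explorer.exe"}
]
""")

# The published analytic looks for EventID 1 (process creation), but the
# snapshot deliberately keeps the surrounding events too, so a reader can
# join against them and find context the original authors did not.
hits = [e for e in snapshot if e["EventID"] == 1]
```

Binder can then serve the notebook, snapshot and all, so anyone can re-run or extend the query interactively without building the environment themselves.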
>>The power of the community. You never stop learning. Roberto, José, I will let you guys go get some lunch.
>>Thank you very much.
>>We will return; stay tuned. We will be back in a moment with our final guest.
>>Welcome back. Alan Thompson from yesterday; a great talk, by the way.
>>It has been awesome. This is the first year I've been here. The audience was great, and the fact that you're streaming it for a broader audience is a great idea.
>>We get a lot of love online. One of the interesting things: unfortunately there were only two talks exclusively about network data. Network data is underappreciated, and your talk is a great example of how relevant it is. There are a bunch of opportunities where it is very applicable.
>>For sure. Even if the [ Indiscernible ] are encrypted, you can learn a lot about them. We have seen, for example, how Zeek can provide visibility into certificate use, and we can see scanning behaviors and initial attempts to do east-west pivoting. It is a vital part of correlating with what's going on at the application level and provides additional visibility.
>>What is a computer without the networking? It is unfortunate we end up underappreciating it. What is your favorite ATT&CK technique?
>>I think ultimately what is most challenging to find is not one particular technique but how they're chained together. One of the things with Zeek is: how can you start to understand behaviors across multiple techniques and how they are put together? If you understand that particular sequence, where they're trying to pivot, you learn what they don't know about how I do my business. Understanding behavior and identifying patterns of behavior can help.
>>One of the things from your slides: you have some diagrams of your ideal deployment, and it seems really complex. Egress opportunities, host-to-host opportunities; do you have any tips? It's interesting capturing this, but there is a lot of networking, like Wi-Fi. Any way to address those problems?
>>I would say step one: most organizations
should be looking at how they already monitor their environment. Firewalls and IPSes exist. I would say ATT&CK is a good way of identifying where the gaps in my visibility are. If you do not have visibility inside your perimeter, that might be an area to look at using Zeek. So instead of initially replacing some of these things, you can actually use Zeek to complement those technologies and build up your visibility. And if that goes well, start to look at how you can combine these capabilities down the line.
>>Speaking of gaps, what are
the common pitfalls when you’re trying to collect this network
data?
>>I think one of the pitfalls of collecting data is that if you collect a summary of the data, that is oftentimes not of good use for security purposes. It can be useful for things like application performance monitoring, so you know how much your network is being used, but from a security perspective you need all of it. And that immediately raises: how am I going to collect all of that, and where am I going to send it and store it?
>>You are opening up a can of worms. Oh my God, where do I start? What is the most relevant data to capture, and what types of information are important? Things like communications, connections, initiation. Also being able to understand sessions: it is insufficient to have just one packet; you need the session, the entire exchange. How do you collect that and then process it?
>>From the ATT&CK team side, it seems like you are
deep into drinking the ATT&CK Kool-Aid. What can we do to help with all those complexities?
>>I made a joke yesterday about simple things like identifying a specific attack pattern. You have connected us to your data so we can easily find it, but if you've got a database, or if you're pulling it down and you've got it on your desk, finding the attack patterns relevant by name in your data was actually a little harder than it should be. Another thing I would say: unfortunately, because of the time allotted to the presentation, I didn't get to talk about prioritization and how you choose which tactics are more relevant. One of the things we actually experimented with was how do you prioritize and score attack patterns? With the MITRE ATT&CK framework, being able to enrich it with not just how serious something is but whether it is relevant to my environment, with multiple layers of scoring capability, would be very nice to have.
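The two ideas raised here, finding an attack pattern by name in the ATT&CK data and layering an environment-relevance score on top, can be sketched against STIX-shaped objects. The two objects below are a tiny stand-in for MITRE's enterprise-attack STIX bundle, and the relevance weights are entirely made up:

```python
# Minimal stand-in for ATT&CK STIX attack-pattern objects.
objects = [
    {"type": "attack-pattern", "name": "Spearphishing Attachment",
     "external_references": [
         {"source_name": "mitre-attack", "external_id": "T1566.001"}]},
    {"type": "attack-pattern", "name": "System Firmware",
     "external_references": [
         {"source_name": "mitre-attack", "external_id": "T1542.001"}]},
]

def technique_id(obj):
    # The ATT&CK technique ID lives in the mitre-attack external reference.
    for ref in obj.get("external_references", []):
        if ref.get("source_name") == "mitre-attack":
            return ref["external_id"]
    return None

# Step 1: find attack patterns by name in the data.
by_name = {o["name"]: technique_id(o)
           for o in objects if o["type"] == "attack-pattern"}

# Step 2: layer a purely hypothetical environment-relevance score on top.
relevance = {"T1566.001": 9, "T1542.001": 2}
ranked = sorted(by_name.items(),
                key=lambda kv: relevance.get(kv[1], 0), reverse=True)
```

Against the real bundle the lookup is identical in shape; only the object count and the source of the relevance scores change.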
>>That is a great path to go beyond. For those in the community for whom this might be the first time they've heard of Zeek, do you have any recommendations for resources?
>>Yes. zeek.org has a lot that you can download.
>>Thank you for your time, Alan, and thank you for joining us for the ATT&CKcon session. We will be back in a little bit with great afternoon talks. ATT&CKcon will be back after we all get some food. Let's do it.
>>We will be getting started in a few minutes. Please find your seats as we continue. Please give a warm welcome to Katie Nickels.
>>[ Applause ]
>>Thank you all. I hope you had a great lunch; thank you to our sponsor for that. You're not going to regret sticking around. I know you're a little tired, but get some caffeine because there are some great talks coming up. I want to start off after lunch with a splash: the next speaker is a Senior Malware Researcher at ESET. Many companies have started to map to ATT&CK, and it's been awesome to see that they have been contributors, sending us different group, software, and technique ideas. Robert is going to be talking about why they use ATT&CK in the first place and giving you examples of tactics, techniques, and procedures; we love those juicy details about how adversaries use those techniques. Please join me in welcoming Robert Lipovsky.
>>Thank you. Hello everyone. Thank you for the introduction. It is a pleasure to be here. The talk is going to be about the most interesting techniques of two infamous APT groups that need no introduction. Over the 12 years that I have been working in malware research, I have had the privilege of working with some of the most skilled reverse engineers and threat researchers, so credit goes to them for some of the discoveries that I will be talking about. Before we get to the main scope of the talk, an introduction to how we use ATT&CK: we started augmenting our threat reports not only with IoCs but also with ATT&CK mappings. This is just a snippet as an example; the list is much longer, but you can get the idea: there is the technique and a description of how the particular malware uses it. Also, for similar reasons and in a similar way, we are now embedding ATT&CK in our solution wherever that was feasible and possible, because there are various levels of granularity for that. Those were the external uses, leveraging the context of ATT&CK; using it internally is one of the goals for enhancing Enterprise Inspector and improving the analytics, but I won't go into the details, because we've been hearing all about those things at this conference, and you might have read a great blog post on the subject. Let's get to those APTs and the interesting techniques. This is not going to be a comprehensive listing at all; that would make for a much longer presentation. Just some highlights which I thought were really interesting and
noteworthy. For the first group, because of the lack of time, I selected one particular piece of software they were using. That was APT28. Last year we found that they used the first UEFI rootkit in the wild. This was a pretty significant deal: running code from the SPI flash memory, the first thing that runs when you start up your computer, before Windows is loaded, before security software has a chance to run. That's a very powerful mechanism, and it gives the attackers the ability to withstand not only a complete wipe of the system but a hard drive replacement. They used this in attacks against targets in Europe. You can read all of the details in the paper. I will just mention that they drew their inspiration from [ Indiscernible ], which was a legitimate piece of antitheft software, and in that context this powerful persistence mechanism makes sense: if a thief were able to get rid of the solution, it wouldn't be a very good antitheft tool. In ATT&CK it is listed under the System Firmware technique. There are some other examples up there. Basically, before LoJax, UEFI rootkits were in the realm of theoretical proofs of concept, or, according to leaks, reported functionality of government agencies or the Hacking Team software. Let's move on to the second group, where we will spend more time looking at their techniques. They are under the umbrella term Sandworm. We go into a little bit lower level of granularity, which stems from the way we were tracking them. The first threat cluster of activity was BlackEnergy. Its most infamous campaign facilitated the first-ever power grid blackout caused by a cyberattack. Then there was [ Indiscernible ], also referred to as [ Indiscernible ]. This was, I
would say, one of the most cunning pieces of malware we ever analyzed, and the reason is that it is able to communicate with industrial control systems hardware using their very own language. It had an implementation of four different industrial protocols, so effectively this malware was bridging the gap between IT and OT attacks. We also saw a shift from BlackEnergy to what we call TeleBots. There were a lot of things in common, like shared infrastructure, but the malware was different. The reason we call it that is that they were using the Telegram API for command-and-control communication. Not only the tools changed but also the focus, so it is hard to conclusively say that they are exactly the same people, the same group, behind it. The focus shifted from critical infrastructure in the energy sector toward the financial sector. The most famous attacks, which spread beyond the borders of Ukraine, were [ Indiscernible ]. In parallel, there was also GreyEnergy, which we consider the successor of BlackEnergy, staying with the original focus on critical infrastructure and energy companies, and still active today. That was just a quick overview of the
group. Let's take a look at the techniques they were using. For initial access they were mostly using spearphishing; that is not surprising. They had various methods of gaining execution, whether pure social engineering or using exploits, even zero-days. Let's get to the more interesting and not-that-common stuff. They did supply chain compromises, starting with the infection of [ Indiscernible ]. There were also other cases. Another interesting technique, used by GreyEnergy, was exploiting a public-facing application: specifically, trying to gain entry inside the network via a vulnerable web server. A couple of other interesting, notable techniques by this group: there were BlackEnergy plug-ins [ Indiscernible ], targeting specific versions and changing settings to enable backdoor, unintended remote access to the affected machine, basically the ability to regain entry even after all of the other tools were detected and cleaned from the system. Another BlackEnergy plug-in
acted as a parasitic file infector. This was really interesting because we don't see a lot of this virus-like malware nowadays. We saw some [ Indiscernible ] circulating that were infected by this plug-in. The third example I want to mention, and I guess the people in the room will recognize this, was execution via HMI; in this particular example it was targeting the GE CIMPLICITY HMI. There were also others that were targeted, such as [ Indiscernible ]. The payload would be launched by CIMPLICITY in this case and would run the first stage of BlackEnergy. The details are in this advisory, so you can go there and check it out. This is interesting because it was one of the first indicators that this group had an interest in critical infrastructure and ICS; this was of course before the blackouts happened. Speaking of which, let's talk about impact, because
if I were to describe this group with just one word, I would call it impactful, for the havoc they caused in Ukraine, whether with the blackouts, which left hundreds of thousands of people without electricity, or with the pseudo-ransomware going beyond the borders and affecting some of the world's largest corporations. Let's take a look at the Industroyer impact. The primary one, of course, was the ability to de-energize a substation. It was doing that, as I already mentioned, by sending commands to these types of devices, opening circuit breakers, speaking the language of these devices. That's an important thing to say: there were no exploits, no software vulnerabilities involved in this. Vulnerabilities, however, were involved in the second type of impact, which affected operations at the workstation and was carried out by the [ Indiscernible ]. This also went after the protection relays, abusing a fault through a vulnerability, and in those particular cases it rendered the protection relays unresponsive. The third impact, although if we were to consider
ATT&CK for ICS it would be classified a little bit differently, at a different level of granularity: basically, the third way Industroyer affected the operation of an electrical substation was through its data wiper component. That one did not go after the protection relays but after the HMIs used to control and monitor them, and its purpose was to amplify the impact, to make recovery from the attack more difficult. Wipers, these disruptive components, are a signature thing. We observed an evolution of them through the years we were tracking this group, from the BlackEnergy plug-ins through standalone components. We had file encryptors masquerading as ransomware, notably [ Indiscernible ], but also others before that with the same basic idea. In some cases they even threw in a little bit of a prank intended to intimidate the victim, so any [ Indiscernible ] fans will probably recognize this. To finish off
the talk: it is great that ATT&CK is evolving. When we started using it, the one tactic that was missing was Impact, and we are happy that it was added. If I were to predict the future expansion, I would say we can probably expect even more criminal types of impact being added; we are already seeing the boundaries between threat types and motivations becoming blurred. With that, I thank you for listening. Use ATT&CK and contribute. Thank you.
>>[ Applause ]
>>Thank you. Thank you for an awesome presentation. We'll give a shout-out from Twitter: she is watching from Missouri. This individual has been tweeting consistently; I showed some of these tweets earlier this morning. Honestly, this one came up with a drinking game. You can reply with your own: take a shot for prioritization, biases, attack trees, spreadsheets or Excel, or Jupyter Notebooks, and drink the entire bottle if a presenter wears a costume. We will have to see if we hit any of those. Please join me in thanking them for the tweets. Come on up.
>>[ Applause ]
>>Thank you.
>>Robert talked about threat intel, which pivots very nicely into the next talk, which focuses on purple teaming. Red teams still go off and do crazy cool things, but they do the things that real adversaries are doing; that is the idea behind adversary emulation. The next presenters will cover purple teaming, where the red and blue come together, which is appropriate given how ATT&CK started. They will be talking about how intel can help drive successful purple teaming, and we will hear a success story. One thing that I love is that these presenters come from two different organizations, bringing together two different perspectives. Please join me in welcoming Daniel [ Indiscernible ] and [ Indiscernible ].
>>[ Applause ]
>>Thank you. I am Matt. I work on the
security team at Priceline. You may or may not know that Priceline is a pretty small company, which means we have an extremely powerful but also pretty small security team. Sort of definitionally we are all generalists, and that means we skew more blue than red. Of course we see the value in adversarial testing, but we had some problems with rinse-and-repeat third-party red teams. For the last year we have been doing purple team simulations. We found that to be a super successful way to work together to level up both teams, and the purple teams that we run are structured around the ATT&CK framework. We are part of a holding company; we own Kayak, OpenTable, and others, and I have seen the outcome of similar purple team operations repeated at all of those brands, so we have learned from what they have done. Just to be definitional: for us, when I say a purple team I mean third-party adversaries working with my in-house defenders for about a week, working through what we can and cannot do. That's the point of this talk: what went well in the exercises, where we have improved, and what we will do next.
>>I am Daniel and [ Indiscernible ]. We will get into it. What are the problems we are seeing? Security spending is going up astronomically, especially in the detection space, but in terms of metrics and how you can track progress against ATT&CK, you don't see strong ways to measure how well that [ Indiscernible ]. The other thing we are seeing is the red/blue confrontation: oftentimes we see teams with different metrics driving their success, metrics that are opposed to each other. It becomes impossible to share information and [ Indiscernible ]. We are seeing some symptoms [ Indiscernible ]. We would just do a traditional penetration test and have some awkward discussions. We would get into a whole bunch of what-ifs, and we realized we were doing a substitution, trying to answer a hard question [ Indiscernible ].
>>I often said those things to
make sure they noted the strong detections in the report, so when my boss reads it, it doesn't look like we are completely lacking. I think we had some of the same problems with red teaming; one of the things that frustrated me the most was repeat findings. Test after test, you know, [ Indiscernible ], I feel like there is one sitting up there on my network. And I think part of the reason we had those repeat findings is that the red team would run, take a week to write a report, send the report, we would read through it, and by that time it's been a month, the logs have rolled, the memory has gotten stale, and we didn't know if we had seen it. The loop was taking too long to feed back into the team to do something actionable with the test.
>>This is our slide about why
we chose ATT&CK. When we were doing purple teams before ATT&CK, we would base them on gut feeling. With ATT&CK we were able to establish [ Indiscernible ]; it also helps us align with the industry and communicate in standard terms. That provides defensibility for why the test is what it is and what the conclusions are. [ Indiscernible ].
>>As part of that holding
company, we are competitive with the Kayak team and the OpenTable team. We want to see how our coverage maps to theirs. We have taken findings from these exercises with the idea of going to the vendors and saying: we bought this product from you; we feel like you should cover these specific things; here's something that can trigger it; why aren't you helping us? And we function a lot by dashboards and burn-down lists, and having done these exercises three times, we know what we want to do, what we want to get to, and it is easy for us to visualize and break into bite-sized pieces for progress.
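The dashboard-and-burn-down habit boils down to a few numbers per exercise. A minimal sketch; the technique IDs and results below are invented:

```python
# Hypothetical purple-team scorecard: for each technique exercised,
# did we have telemetry, and did a detection fire?
results = {
    "T1059": {"telemetry": True,  "detected": True},
    "T1021": {"telemetry": True,  "detected": False},
    "T1071": {"telemetry": False, "detected": False},
}

total = len(results)
detected = sum(1 for r in results.values() if r["detected"])
telemetry_gaps = [t for t, r in results.items() if not r["telemetry"]]

# Burn-down style headline number for the dashboard.
coverage_pct = round(100 * detected / total)
```

Re-running the same scorecard each exercise is what makes progress visualizable and bite-sized: the gap list shrinks and the coverage number climbs, or you know exactly why not.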
>>What are our objectives? We have four key ones. The first is to improve detection capabilities through targeted emulation. The key word I want to draw out is [ Indiscernible ]. The organization understands [ Indiscernible ].
>>The first round was mostly covering [ Indiscernible ]. It has been talked about a lot in the last two days, but really: do we have the right coverage? As we progress through these, we've been able to develop [ Indiscernible ]. It helps us set a roadmap for 18 months.
>>There are a number of constraints, so we are doing these over about a week, keeping people on the team. [ Indiscernible ].
>>We think in terms of threat
families; it is sort of a threat actor plus a technique. We prioritize those that are [ Indiscernible ]. Port knocking is not something we prioritize. We put thought into it before we run these; we know what we want to test [ Indiscernible ].
>>[ Indiscernible ]
>>I think the most important thing here is to know, on our side, when we should leave something for later. If we just don't have the telemetry, we will not get it this week; set it aside and we will pick it up later.
>>[ Indiscernible ]
>>I would rather have a team of engineers who have accountability for the detections we create and respond to. Because of that, we are pretty disciplined about writing good alerts and having a strong standard for how we produce an action. Almost all of our telemetry flows through Splunk, and all of that fires into a Slack channel. This is awesome because it's eye-catching; we have these bots we can respond to; we don't have to be in the office. Every alert we have will link to a query or playbook that shows us exactly what is going on. We also keep all of these on a big list, and all of them are taxonomized by ATT&CK: every detection we have links to a TTP in the spreadsheet, so someone who's looking at it from an alert and doesn't know why we wrote it originally has some context about what it is, why it was created, and who wrote it. A lot of these alerts we created through the purple team process, and what is interesting there is that it could have been written by a red teamer who wrote the report, and that's what's most valuable to us: the people who are doing the damage understand what we can see on our side, whether we have the logging or not, and, as Daniel said, they leave it better than they found it.
>>My job is so hard because we
have to do everything perfectly, and you never hear from me. The attackers only have to get it right once and then it is game over. That is kind of a lie. What we found is that not only do attackers have to get it right more than once, but in our environment at least, attackers make mistakes, and much of our telemetry and detections are built around finding those mistakes. If they do something perfectly the first time, we may not fire on it, but as the attackers are enumerating and feeling out the network, that is a gold mine of TTPs that we can alert on.
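That "attackers make mistakes" insight translates directly into detections. A toy sketch flagging the noisy enumeration an attacker produces while feeling out the network; the event shape, thresholds, and hostnames are all invented:

```python
from collections import defaultdict

# Hypothetical auth events: account enumeration produces a burst of
# failures from one source -- the kind of mistake detections key on.
auth_events = [
    {"ts": 100, "src": "wks7", "user": "alice", "ok": False},
    {"ts": 101, "src": "wks7", "user": "bob",   "ok": False},
    {"ts": 102, "src": "wks7", "user": "carol", "ok": False},
    {"ts": 500, "src": "wks9", "user": "dave",  "ok": False},
]

WINDOW, THRESHOLD = 60, 3  # seconds, distinct users

def enumeration_sources(events):
    by_src = defaultdict(list)
    for e in events:
        if not e["ok"]:
            by_src[e["src"]].append(e)
    flagged = []
    for src, evts in by_src.items():
        # Distinct users failing from this source within the window.
        users = {e["user"] for e in evts
                 if evts[0]["ts"] <= e["ts"] <= evts[0]["ts"] + WINDOW}
        if len(users) >= THRESHOLD:
            flagged.append(src)
    return flagged

flagged = enumeration_sources(auth_events)
```

A single failed logon is normal; three distinct accounts failing from one source inside a minute is the attacker's "feeling out" phase, which is exactly when they are loudest.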
>>[ Indiscernible ]
>>It is just taking it through the environment: how do you get to the ground rules?
>>[ Indiscernible ]
>>We have gone through an exercise where they create tickets for us; they hand us one and we create [ Indiscernible ]. It has been great, and at that point the tickets are going back.
>>What have we learned and what
did we get out of working with purple teams? I think the first thing to remember is that this is not a red team, not a pen test, and although we are bringing in a third party, we should contribute just as much resourcing and do just as much work. The prep work is extremely important. We have learned that over iterations: the basics of getting stuff ready in advance, but also knowing what you want to get out of it. Bringing your test cases, and being flexible about them, is extremely important, because what you think you're going to do on Tuesday may or may not work out, and it is great to be able to drop something and move on. Diving into a couple of those things: what was most important to us was communication. A month before the purple team, we start publicizing it to the tech teams. In meetings we say this is coming up; we let people know what we plan to test; we let people know where we will be physically, so they can drop in and see what is happening. We set up a channel with the team and make sure they have their accounts set up before they show up. We agree to a rhythm of communication, of standups and everything else; it is just so much better to have that ready when the constraint is one week of red team time. Before they walk into the environment, we think about what we want to cover, what we want to test, and what we know we want to ignore, based either on what we know from our CTI or on what we have seen in the environment in the past. We make some assumptions about what TTPs they will cover, and we make sure the purple team has access to those tools. We put those to the test. More than
anything else, it is the communication, talking to everyone in the [ Indiscernible ]. The way I described it across the [ Indiscernible ]: when we do our PCI testing, I want no findings in that report, because why would I? In this case I want all of the findings. I want to know everything that could possibly be wrong, so we can prioritize it and fix it on our own timeline, with our own prioritization. For the most recent purple team a few weeks ago, a new person had just joined my team, and this was fantastic onboarding for them. They had immediate access to the tools, they saw the detections that worked and did not work, and they learned a ton from the adversarial testers and from the team about what we prioritize in our environment. When we are done with the week, we expect to have more detections and more findings, plus test cases we can keep and re-run in the environment.
>>[ Indiscernible ]
>>What do we want to do for next steps? The first thing to do when the assessment closes is to start thinking about the next one. Your memory will never be fresher, so that is the best time to start sketching out what TTPs we lacked, what telemetry we lack, and what we want to make sure we have covered when they come back in three months. We are huge on automating [ Indiscernible ]. I will show you how that works. We make sure that what we already have is still working; we repeat tests, some of them revalidations of the existing detections we have. And
we expect when we are done to have new stuff to hand to our
vendors to say this worked, this didn’t work. How can you help us
get better? Strategically we’ve used the purple teaming to
reprioritize what we are going to do, especially after the first round. You know, I was a little skeptical about command history, but when I saw how many places lit up, it was something to reprioritize and broaden much more rapidly.
>>[ Indiscernible ] >>I mentioned we were fanatical about alerting and having [ Indiscernible ]. We do the same with the findings out of the purple team. You don't need to see my [ Indiscernible ], but each finding is tracked and analyzed against ATT&CK. It is something that we can track with a burndown chart: we know where we are, and we know where we want to be at the end of the sprint, at the end of the quarter. >>[ Indiscernible ]
>>[ Indiscernible ]>>[ Indiscernible ]>>Thank you. Any questions?
[ Applause ] >>We have a couple of minutes for
questions.>>I’m going to take a moment to
award [ Indiscernible ].>>One of the things I get tied up in is managing and keeping track of what detection rules are being used; I like how you guys covered that. My question is, with Excel sheets or Google sheets, not to restart the debate, but does that scale once you start getting into multiple data sources, or rules doing different things? For instance, in my case I have Sigma rules for my endpoint and I have [ Indiscernible ]. Do you think that sheets of any sort can scale that way, or should we turn to something else? It is an interesting discussion.
>>We are fortunate in that we are well funded, with a big Splunk instance, and we pump everything through there. We can really prioritize it and align it with what we see. But keeping it in a spreadsheet is terrible. Spreadsheets are always the wrong answer. Given the limitations of the environment we have today, we can do a daily backup of the entire configuration and keep that, but we haven't broken that down to every individual detection and alert. My [ Indiscernible ] is that every detection and alert is atomically represented in a library, where we can go in and make modifications to each one individually, but that is maybe next year. Right now, when you need to solve a problem, you open up Google Sheets.
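As a hedged illustration of the "atomically represented library" idea mentioned above (the rule names, fields, and values here are all hypothetical, not the speakers' actual setup), detections could be tracked as structured records rather than spreadsheet rows, which makes multi-source, multi-format questions directly queryable:

```python
from dataclasses import dataclass

# Hypothetical atomic representation of a detection, instead of a spreadsheet row.
@dataclass
class Detection:
    name: str
    technique: str       # ATT&CK technique ID this detection covers
    data_source: str     # e.g. "endpoint", "network"
    rule_format: str     # e.g. "sigma", "splunk"
    enabled: bool = True

# A tiny illustrative library spanning multiple data sources and rule formats.
library = [
    Detection("Encoded PowerShell", "T1059.001", "endpoint", "sigma"),
    Detection("Suspicious egress beacon", "T1071", "network", "splunk"),
    Detection("LSASS access", "T1003.001", "endpoint", "sigma", enabled=False),
]

def coverage(library, technique):
    # All enabled detections for a technique (or its sub-techniques),
    # regardless of data source or rule format.
    return [d for d in library if d.technique.startswith(technique) and d.enabled]
```

This scales where sheets struggle: adding a new data source or rule format is just another field value, and coverage questions become one-line filters.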
>>[ Indiscernible ]
>>[ Laughter ] >>We can hug.
>>One more question. I particularly liked the talk about organizational dynamics; I'm a bit of a hobbyist student of organizational dynamics, and of the challenge of bringing the red team and the blue team together and why that friction was there. Could you offer a diagnosis of why that is, what you think it is, and what you learned as you instituted a purple team, so we can learn from that and encourage more purple team activity?
>>I think from our perspective it was people showing up once a year or once a quarter, locking themselves in a room, and walking out with a report, feeling very proud of everything they had found. It doesn't help with the blue team's self-esteem, and it doesn't, as we discussed, produce a good outcome. And I assume from the red team's perspective that is frustrating too, because they do the same thing every time: they come back a year later and all the shells connect again. I can't imagine liking that.
>>[ Indiscernible ]>>We do the same thing with every awareness exercise we present and every training we put our teams through: we survey afterwards and see how it has been received. I think the last thing that really improves the blue team's esteem of the red team is when they get in there themselves, write some detections, look at the data sources, and help out.
>>If you haven’t met [ Indiscernible ] yet you probably
should. Please join me in thanking our speakers.>>[ Applause ]
>>Give a shout out to [ Indiscernible ] on Twitter. He
asked for specific shout out so I am giving it to him that he is
thankful that it is live stream. For your break, part of another
awesome team update for your bit [ Indiscernible ] has been a [
Indiscernible ] for about 10 years. He’s been heavily
involved in the [ Indiscernible ] that he will give you an update on [
Indiscernible ]. Pretty exciting so please join
me in welcoming him. >>[ Applause ]
>>Thank you. I don't have any spreadsheets, but I do have memes and I have a heat map. First, who is familiar with [ Indiscernible ]? That is more than I thought. For those of you who aren't familiar with it, it is not a plane or a real car; it is an acronym and a website. It is an analytics repository that we have [ Indiscernible ]. It is actively maintained, which is nice, because it wasn't for a while: it was actively maintained from 2013 to 2016, then it had a hiatus, and now it is back in a big way. Besides the analytics, I think it has some interesting things that a lot of people don't have. There is a data model that allows you to do tool-agnostic mappings, and you can use those. We also have mappings of sensors, although there is a caveat on that which I will mention. And we have an exploration tool called [ Indiscernible ]. It is a tool for showing sensors, data sources, and analytics; it's really more like a proof of concept. It is a cool way of showing that if you add a sensor, you get more coverage. [ Indiscernible ] Recently we've been doing a lot
of stuff, just trying to increase the quality of the good stuff we had and make it better, and adding new analytics [ Indiscernible ]. That is pretty awesome. Another one is we have converted our analytics to [ Indiscernible ]. Another one is we have added implementations. This is an example of one. As you can see [ Indiscernible ], we don't really care which tool; we are agnostic. Our goal is really to increase the coverage of these.
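To make the data-model idea concrete, here is a minimal sketch (not actual repository code; the event shape and field names are invented for illustration) of a tool-agnostic analytic expressed against a simple object/action data model, in the spirit of the pseudocode implementations described:

```python
# Hypothetical event records shaped like a simple process data model
# (object=process, action=create). Field names are illustrative only.
events = [
    {"object": "process", "action": "create",
     "fields": {"exe": "winword.exe", "parent_exe": "explorer.exe"}},
    {"object": "process", "action": "create",
     "fields": {"exe": "cmd.exe", "parent_exe": "winword.exe"}},
]

def search(events, object_type, action):
    # The data-model layer: any sensor's telemetry mapped into this shape
    # can be queried the same way, regardless of vendor or tool.
    return [e["fields"] for e in events
            if e["object"] == object_type and e["action"] == action]

# Tool-agnostic analytic: an Office application spawning a command shell.
hits = [p for p in search(events, "process", "create")
        if p["parent_exe"] == "winword.exe" and p["exe"] == "cmd.exe"]
```

Because the analytic is written against the abstract model rather than a specific product's query language, the same logic can be translated to whatever backend an organization runs.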
From a user perspective, [ Indiscernible ]. It is interesting to think of it that way. There is more. We have [ Indiscernible ]. BZAR is a set of scripts that we have written implementing several network-based analytics. It is cool; we are on the cutting edge of what you can detect. [ Indiscernible ].>>[ Indiscernible ]>>[ Applause ]
>>Time for maybe one quick
question. Any questions? If not, there we go.
>>What inspired the [ Indiscernible ]? >>That is a good question.>>Did you and John have more time? >>What happened is we realized we had this CAR thing, it was still out there, and people do still use it. Maybe we should start doing this again; it was one of those spur-of-the-moment things, and it led to much bigger things. That is how it happens. From my perspective, you can use it for analytics. We heard from people who said they want to see more. I'm grateful for all of the work.
>>[ Applause ] >>A Twitter shout out to someone who is streaming online. We will now head into a short 15-minute break before we get to Adam. This is kind of one of my favorite parts of the entire event: it is candy bar time. We will have a brief break, and this is your last chance to visit the exhibitors. This is our final episode with Jamie Williams. Enjoy the candy. See you back here at 2:30 PM.>>How did it go?>>I was happy with it, and I hope the audience enjoyed it.
>>I know everyone is buzzing around. How has the experience been?>>It was great. The organization was amazing and also the quality of the talks, so I'm having a great time. >>One of the things that I took
away from your talk, and from looking at your products, is how invested you are. Can you give us a little example?>>We are using ATT&CK in different ways, without
going into all of the specifics and details, and some other uses will probably spin up in the future, but the two different types of uses are external and internal. The external ones are to augment our reporting and communication with our readers and research reports, whether private or public, for example, basically leveraging the common aspect. The other one is for internal purposes: we have an EDR solution, and we have different rules and analytics for detecting things, and we combine that with the capabilities and the coverage, so the regular stuff.
>>One of the things, for both days and even last year, is how key that has been throughout the entire process. Has there been any other value added?>>I think that is most important: we invest heavily in threat sharing and raising awareness. That is quite visible from our reports and the amount of publications that we do, so that was definitely one of the main benefits. >>We appreciate it. In your case studies you
actually kind of captured the evolving nature of ATT&CK, the different types of ATT&CK and their impact. I wrote this question down because I wanted to ask you: do you want to go into detail about what you are forecasting? >>I think we already see some
criminal types of software, for example, in ATT&CK, but not too many. There is quite a heavy focus on APTs and targeted activity rather than crimeware, opportunistic, and commodity stuff. The problem, from what we are seeing and from the attacks detected across the user base, is that there is just so much of it. Sometimes you can differentiate, you sense the primary motivations, but there are so many overlaps and the boundaries are blurred. That is one aspect, and it is difficult to draw a hard line and say we will focus on this and not all of that. The other thing is that whoever is responsible for the security of an organization is worried about all sorts of threats; they are not going to say we only care about these groups and not the others. These are the things they are meant to have awareness about. I think we will probably see that as well. >>Thank you for the
perspective. I appreciate it. We will get a quick break before
the next guest.>>Welcome back. I'm here with Daniel and Matt [ Indiscernible ]. These guys delivered an excellent talk. I noted the memes in there as well as [ Indiscernible ], so we appreciate that.>>Most of those are my cats. >>[ Laughter ]
>>One of the interesting things is that everyone talks about purple teaming and it gets repetitive: responsibility, communication. One innovative thing you said was accountability, and you quoted that people perform to the level at which they are measured. I wanted to look at where that comes from, and how do we capture that?
>>As I said, that is not my idea. Human nature is that we like to perform to and make the numbers we are being measured against. So when it comes to setting objectives for tests, or for people in general, choosing those numbers very deliberately to drive the behaviors you want, and putting deliberate thought into what you are choosing, helps make them more successful.
>>Excellent perspective. One of the questions is, there's been a lot of investment in [ Indiscernible ]. How do you maintain it? >>In terms of maintaining capabilities, you mean? >>The culture. Not just the capability; it is not a technical problem or a people problem, it is very much a cultural problem. >>For us, it is a commitment to
redo it. As soon as you finish one, you schedule the next and think about what you want to accomplish. When we do it, it is time-bound, and there is stuff we know we have to cover. We would like to make it more bite-sized and smaller; we could do tests that take a half-day or a single day. >>The engineers who are best at putting out engagements may not make the best purple team. Deliberately choosing the people we are going to put on the engagements, and giving them proper training so they don't go in blind and they know the process, makes sure that we can scale that way as well. >>It is a culture [ Indiscernible ]. Everyone has goals around communication, around being able to deliver a technical topic to a team of peers. Because we are such a small team responsible for a big organization, it is important to us that that is the number one thing we can do.
>>I really want to circle back to the micro purple team. [ Indiscernible ]. What do you mean? >>[ Indiscernible ].>>[ Indiscernible ]
>>Being deliberate about it. [ Indiscernible ].
>>If we write 1000 detections [ Indiscernible ].>>Excellent point. You
mentioned the need for bringing new people in, not just hiring people out of school but also people coming in later in their career. The purple team is a great example; this is a complex space with all of these different fields. Any advice for people who want to join this field? Where do you start? >>There are a lot of projects [ Indiscernible ]. Work with other projects in the community; the way we like to think about it is being [ Indiscernible ]. Really, just find an area where you want to specialize and focus, without cutting yourself off from the other aspects of security, so you know how it all plays together.
>>I think from my perspective it is vital to have diversity on your team, including diversity of experience. If everyone on my team has 10 years in security, they are not going to think of something that someone with a completely different background might come up with. Daniel, thank you so much. This
is our last session for today. I appreciate all of the
contributions from the community. Don’t forget the rest
of the day is scheduled for ATT&CK. >>MITRE ATT&CKcon 2.0 continues. Please give a warm welcome to Katie Nichols.
>>All right everyone who is enjoying candy, please come back
and enjoy it here because we have one of my favorite parts of
the entire conference right now, our lightning talks. So grab
your candy to go, come on in, folks online you should go buy
some candy. I'm going to ignite a new debate, because I feel like we have worn out the Excel debate. Candy corn: great candy or best candy ever? We will let that debate rage on Twitter. So next
up as I mentioned, we have our lightning talks. So my pleasure
to introduce our facilitator for those lightning talks. My
teammate Adam Pennington. It's really great because I just get to sit back, relax, and kick up my feet while he does all the work here. Adam, as you may realize, is a member of the core ATT&CK team. He has been at MITRE for 11+ years. He went to Carnegie Mellon and still does tons of stuff there. He works on deception as well, so you can never really trust that he is telling you the truth. Please join me in welcoming, for our lightning talks, Adam Pennington. >>Thank you, Katie. And
hopefully we will get the slides up soon. So lots of people have
talked about in the past that they really liked some of the
short talks that this conference has done. But you know, we don’t
have any 50-minute talks. You know we are
just doing short, short all the way through. So this is going to
be even shorter. Just a couple weeks ago, we put out an
additional call for presentations, asking people to
give us their five-minute pithy war stories, their half-baked
ideas. And I think people really delivered on that. It was
actually selective, so we had more submissions than just the people
up here today. We are going to be running these back to back,
there is no time for Q&A. We are going to be going strictly timed to five
minutes. I’m going to be showing you the timings that the
speakers are also seeing. So you know how close they’re actually
going to get. So I’m going to introduce our first lightning
talks speaker. He wins a special prize for not having any slides,
making my life easier. Brian Donahue.
>>Hello. >>Hello everybody I’m Brian
Donahue, I work right red canary. I have worked on last
year’s threat detection report, helped produce and write that.
Currently I am helping produce and write this year's threat detection report. And in the spirit of getting to the point
quickly in case I get cut off, I think that everyone should be
taking the threats that they observe on their network, whether it is their own enterprise's network or the networks of the customers that they are monitoring in the course of business. I think we should take all those threats, map them into MITRE ATT&CK, and then release what we find in the form of prevalence rankings. The reason I want to do this is because I think we should be creating a sort of grand unified MITRE ATT&CK heat map. The reason I think we should do that is that I do not know that we make really great scientific decisions about how we allocate resources in security. The other day I was reading
an article in CyberScoop, and it was about how the security
insurance companies are going to be in position to dictate the
tools that we buy. So they create a white list, if you buy
a tool on the white list you get a discount on your premium. My
initial thought was well, that is a horrible idea. Then I
thought about it a bit, it is like who is more incentivized to
make sure you don't get breached than the company that's going to have to pay for it when you do? So that got me
thinking, how do we make these decisions as is? And I got a
list, it’s incomplete. I’ll just say ahead of time, the list, each item on
the list has pros and cons, they are all flawed, and none of them
are scientific. Sometimes we have experienced people on our teams, and we let them use their intuition to decide how they are going to allocate resources. Sometimes we go and talk to an analyst firm. Other teams read the news,
read about an attack, try and build up coverage for it,
circumstance plays a large role in this. Something bad happens
and you have to buy a tool. There are plenty of
firms out there that probably just do this based on sort of
regulatory and compliance requirements. And then I bet a
lot of us just kind of make arbitrary decisions. So the
question is, is there a better way? Of course the answer is
yes. All you have to do is figure out the threats that are
most likely to occur, and sort of focus on them first, and then
move backwards to the ones that are less likely to occur.
Luckily, MITRE attack gives us kind of a nice framework to work
with for this. It is sort of like takes this nebulous
indefinite thing, a threat landscape, and makes it definite
and sort of gives you a finite list of techniques and the data
sources you need to observe those techniques. So, I think of this in terms of baseball. Forever, you just sort of stuck nine players on a baseball field, and that was how you played defense. With the advent of advanced analytics, increasingly we are seeing infield shifts: we have got great charts for batters, we sort of know where they are going to hit the ball, and you move the infield accordingly to neutralize their strengths. I am thinking of that in terms of MITRE ATT&CK, where the framework itself is the baseball field. You do not want to just throw coverage at the entire matrix; you want to figure out the hotspots. So a year ago we
started producing our threat detection report. The idea was, let's figure out the MITRE ATT&CK techniques we see most often so we can answer a few questions: one being how do I get started with ATT&CK, another being how do I use ATT&CK, and maybe a third being how do I prioritize coverage? So I'm thinking that, if everyone does that, we won't have just Red Canary's very endpoint-centric view of how you prioritize coverage and which threats are most likely to occur; we should have everybody's view on that. I want to know what firewall companies are seeing, I want to see what email filter makers are seeing, so we can get a really universal view of how the threat landscape plays out on the ATT&CK matrix. So, I found three
problems with this as I was preparing, and there are probably more, I am not that creative. But interestingly, two of those problems were solved yesterday. I will start with the ones that have been solved. The first one I could not figure out was how do we normalize the data, because for a company with one customer, their prevalence rankings are not going to be the same as for a company with 10,000 customers.
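One hedged way to attack the normalization problem just described (the vendor names and counts below are invented for illustration) is to convert each contributor's raw sighting counts into within-vendor proportions before averaging, so a contributor with 10,000 customers does not drown out one with a single customer:

```python
from collections import defaultdict

# Hypothetical per-vendor counts of technique sightings.
vendor_counts = {
    "vendor_a": {"T1059": 500, "T1566": 300},        # one customer's worth
    "vendor_b": {"T1059": 80_000, "T1003": 20_000},  # thousands of customers
}

def normalized_prevalence(counts_by_vendor):
    # Normalize each vendor's counts to proportions of its own total,
    # then average the proportions across vendors, so every contributor
    # gets equal weight in the combined heat map.
    totals = defaultdict(float)
    for counts in counts_by_vendor.values():
        vendor_total = sum(counts.values())
        for technique, count in counts.items():
            totals[technique] += count / vendor_total
    n = len(counts_by_vendor)
    return {t: share / n for t, share in totals.items()}

heat = normalized_prevalence(vendor_counts)
ranking = sorted(heat, key=heat.get, reverse=True)
```

This is only one possible normalization; rank-based or customer-weighted schemes are alternatives, which is exactly the kind of question an effort like ATT&CK Sightings would need to settle.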
The other problem I saw was how do we aggregate it all? I actually think that MITRE is potentially solving both of those problems with ATT&CK Sightings; I'm interested to see where that goes. The third problem I'm seeing has to do with nomenclature. ATT&CK gives us a nice common language, but we don't have a common language to talk about things like detections and alerts. We call them detections at Red Canary, and that is what we are ranking and applying to MITRE ATT&CK when we create our threat detection report, but that is not necessarily what other people are doing. So go forth and create the grand unified heat map. [ Applause ] >>So, Dan posted a question to
us on Twitter. He said, Halloween is coming up, so what costume should I actually wear for my lightning talk? And Katie told him if he dressed up like a pew pew map, she would get him a free drink at our reception the other night.
>>So not only a pew pew map, but a badly mangled one. [ Music ] [ Captioners transitioning ] >>One piece of ATT&CK tied to
another piece of ATT&CK, tied to another piece of ATT&CK, equals something else, and there's a state basis to it, the proverbial unicorn. We want our threat intelligence to be able to capture that: it's not just a checklist of things I have seen, but understanding the context of the order and timing of what's happening. Of course, starting with Katie and always ending with Katie, this could become a machine-readable schema that we can actually use to operationalize threat intelligence, if we all come to it together as an industry. Thank you.
>>Our next speaker up is [ Indiscernible ].
>>I feel so underdressed now, guys; where is my onesie? [ Laughter ] I'm here today to talk about ATT&CK, intelligence, and purple teaming. Before I get started I have to give you the obligatory caveat that the views I express today are my own and do not necessarily represent those of the Federal Reserve Bank of New York or the Federal Reserve System. Over the past couple of days, people are doing detection, people are doing really cool heat maps, and people are also doing purple teaming; we heard a great talk about that earlier today. What we wanted to do is micro purple teaming, a micro purple teaming workflow. It began, as you might expect, with yet another fusion initiative. I
think we have all heard that coming down the line from our leaders at one time or another. We were being tasked to collaborate, not just within the intelligence team, but better across the entire organization. We want to be talking not just about detection creation; we should be talking to hunt teams, red teams, teams we are not talking to today, and ATT&CK is really great here, to be a common language, if you will, to be the cornerstone. We had only gotten pretty good at the center vertical, that is, ATT&CK-based intelligence: breaking Intel reports into TTPs, which we were then handing off for detection creation. We would write an internal report where we got many great TTPs, but who was using them? We were creating use cases, but I think there are a lot of other ways we could be using this, and the fusion call came from
on high. So we turned to the blue team: during the hunt engagement we got all these great TTPs, here's a great report, here you go. But we were starting to hit roadblocks. You haven't truly lived until you've written an 80-page report full of TTPs with a spreadsheet attached to it. It is no fun for anybody, and you can imagine people are not super excited to get that much to work with, so there were clearly some constraints. Even on the Intel side we were finding: were the TTPs being generated truly high priority, really what they should be about? Were the TTPs being created a little repetitive? And were we doing overlapping work on similar intelligence? We clearly needed to do something, so our solution:
make it micro. Instead of these big reports that take a lot of time, let's go small. It all started with a single TTP, abbreviated here as tactic and technique; we kind of abbreviated the procedure part, but don't worry, Katie, it was there, I promise. We gave that to both the red team and the blue team at the same time. The red team says, all right, here is a command that I created to emulate this behavior, and the blue team says, I created a use case that will be able to detect it, deployed simultaneously in multiple different environments across the Federal Reserve System. As you guys know, we are pretty distributed, so there are different environments, and we found that yes, we could detect it, but we also found a particular environment where we didn't. That leads us to the final result, which is that rapid emulation and validation is going to help us be a lot more responsive to high-priority threat actor activity, and in particular gives us much better assurance on coverage. That created this new workflow: instead of having a big engagement, we say, all right, Intel has determined an event trigger, perhaps a threat actor behavior, maybe just one single behavior, that we want to ensure we are covered on. Intel provides the TTPs, the team emulates them and creates detections, we validate what our coverage is, and the answers feed back into the trigger. At the end of the day, purple teaming is great, but let's make it micro. Thank you.
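A minimal sketch of that single-TTP workflow (the technique, command, and environment names below are illustrative inventions, not the Federal Reserve's actual data): one record ties the TTP, the red team's emulation command, and the blue team's per-environment detection results together, and the gaps drive the next iteration:

```python
from dataclasses import dataclass

# Hypothetical record for one micro purple team exercise: a single TTP,
# one emulation command, and the detection outcome per environment.
@dataclass
class MicroExercise:
    technique: str
    emulation_cmd: str
    detected: dict  # environment name -> bool (did the use case fire?)

exercise = MicroExercise(
    technique="T1003.001",
    emulation_cmd="procdump.exe -ma lsass.exe out.dmp",  # illustrative only
    detected={"env_east": True, "env_west": False},
)

def coverage_gaps(exercise):
    # Environments where the emulated behavior went undetected become
    # the event trigger for the next round of detection engineering.
    return [env for env, hit in exercise.detected.items() if not hit]
```

Because each exercise is one TTP and one command, the whole loop can run in hours rather than the weeks a full engagement and 80-page report would take.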
>>Thank you. Our final speaker now: please welcome to the stage [ Indiscernible ].
>>Thank you. I am Nick Carr, and this is Guardrails of the Galaxy. I work at the intersection of hunting and detection, looking across all of our services and all of the product telemetry, reverse engineering, and technical techniques. So why talk about execution guardrails? First of all, this was our first technique contributed to the ATT&CK matrix. When I talk about them, people immediately start saying things like, the U.S. uses guardrails in their malware and other nations do not; maybe examples like that were raised earlier by the NSA team. And most importantly, I guess my ATT&CK celebrity Katie Nichols has a quote saying that the hallmark of sophistication is restraint. We agree: sophistication means people are restricting their activities, guardrails are a manifestation of restraint, and that is why execution guardrails are interesting in attackers. Quickly, to lay out a combination of a definition and a detection concept: attackers are doing a couple of things. They are checking an environmental condition and comparing it against an attacker-supplied value. These are the behaviors, in this particular order, that you want to look for; I know I am generalizing in this quick talk. It also includes environmental keying: protecting the payload by encrypting it with that same environmental information. Those are crypto functions, in that order, again tied together. A couple of detection issues we run into when looking for something like this: the first is that it catches a lot of recon. If you imagine a non-targeted phish that comes in, collects information from the host and the environment, and beacons it out, attackers encrypt those results, but that is not the kind of guardrail we are looking for. We also see a lot of people confuse what we are talking about here with broader anti-analysis or VM evasions, just checking whether I'm running within a virtual machine; what we are talking about is keying on attacker-supplied values, which is more interesting. The other thing that I see a lot of, and occasionally share, is the insider threat to guardrail detection, which is legitimate use of protected execution. In this case, here are a bank's macros that are checking the computer name, making sure it matches a particular naming convention, and checking the egress IP address before running the legitimate macro; that's not malicious. What you are really here for are the awards: the first three awards for technical achievement in guardrailing.
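The check-then-key pattern just described (environmental keying, T1480.001 in ATT&CK) can be sketched as follows. This is a benign, simplified illustration: the hostnames are invented, and a toy XOR keystream stands in for real cryptography; the point is only that the payload decrypts correctly in exactly one environment.

```python
import hashlib
import itertools

def derive_key(env_value: str) -> bytes:
    # Derive a key from an environmental value (e.g., hostname or username).
    # On the wrong machine, this produces the wrong key.
    return hashlib.sha256(env_value.encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR as a stand-in for a real cipher (symmetric, so the
    # same call both encrypts and decrypts).
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# "Attacker" side: payload encrypted against the intended target's hostname.
payload = b"echo simulated-payload"
guarded = xor_crypt(payload, derive_key("TARGET-HOST-01"))

# "Victim" side: only the intended environment recovers the plaintext;
# an analyst sandbox derives a different key and gets garbage.
right_env = xor_crypt(guarded, derive_key("TARGET-HOST-01"))
wrong_env = xor_crypt(guarded, derive_key("ANALYST-SANDBOX"))
```

For defenders, the detection signal is the ordered behavior: read an environmental value, feed it through key derivation, then decrypt and execute, all tied together.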
First up is an offensive tool from the guys at NCC Group. They do some fantastic guardrailing; if you get a chance, look at some of the things they do. They publicly released a tool to encrypt a payload keyed to the target environment. In this example it keys on what it expects to see in the victim's environment, such as local details of a broadband customer's router landing page; it is an example of the kinds of ways you can restrict execution just like an attacker would today. The next one uses guardrails in a close-access operation. It is a spicy one. It is currently out there on VirusTotal; I will share out the hashes and a screenshot. The malware actually looks for a USB wireless adapter, enumerates access points, creates a rogue access point entry with a wireless profile, and tries to connect to it. We believe the attacker was then physically close by with an access point, and the malware was keying on that; pretty interesting. Running out of time: the lifetime achievement award for guardrails goes to APT41. If you have a chance, check out the APT41 report; you might know them from their supply chain attacks, but you might not know that they used the Microsoft crypto API to key payloads to individual user accounts on individual systems. Alright. >>Thank you very much, Nick. [
Applause ] >>I'm not going to let that go too far, because we are technology people who share all kinds of things on Twitter: spicy tweets, techniques, procedures. I'm going to hang out here just a moment to reintroduce Adam, so he can take a second and shake it off while I talk. That was awesome; this is one of my favorite parts, and thank you so much for contributing, I enjoy hearing your ideas. We will wrap up the ATT&CK updates from the team by discussing what is left of PRE-ATT&CK, which was referred to a few times, so please join me in welcoming Adam Pennington. [ Applause ]
>>Thank you to Katie for giving me a second to switch gears there. This is unfortunate for our online audience, but can I get a quick show of hands: how many of you are operationally using PRE-ATT&CK today? Two? Okay, I was curious. One of the things that we told all of our speakers this week was not to introduce ATT&CK, and we really appreciate that people have been pretty good about that, between both the external speakers and the ATT&CK updates people have been doing. But we know that not everyone has actually worked with PRE-ATT&CK in the past, so I wanted to start there. Most of what we've been saying when we say ATT&CK this week is Enterprise ATT&CK. ATT&CK was originally created to look at the behaviors of adversaries after they had broken into environments. Then, a couple of years after that, we went to look at some of the threat intelligence and at techniques for what adversaries would do before they broke into a system. This is the graphic we have used for years
we are breaking up the kill chain into pre-attack and
enterprise attack. Just about a year ago we had an efforts to
take the launching compromise techniques tactics from
pre-attack cannibalize and create the
initial access tactic . This wasn’t the same
techniques but it did represents the same coverage the same space
and so we start the process of integrating pre-attack into
attack. If you remember back to Blake’s original beginning he
talked about the idea of ringing some of the different attack
efforts together and just making it all one big happy attack.
Rather than just go along and continue to take attackers to a
time our plan is to rip the Band-Aid off and make pre-attack
and attack all into one piece . This is the picture started
with this is the picture we have been preaching for years but
it’s not really quite accurate. Really where the space looks
like a little bit more like this . Pre-attack covers a space that
actually comes before the kill chain that often to the theory
intelligence planning goal some of the budget resource
management issues so it’s a bit less of the kill chain. In looking at how we actually
wanted to take techniques from PRE-ATT&CK, including some of that space outside the kill chain, we had to sit down and decide what we would pull in and what we would not. A member of our team, Ingrid Parker, spent some time on this; she worked with us and came up with a couple of criteria for deciding what goes in. First, it is technical: it has something to do with electronics or computers and is not just a planning exercise. Second, with ATT&CK we are mostly talking about behaviors in your environment that, if you did exactly the right thing, you probably could see. You might not want to monitor all of them, but they are in a space that you control and have visibility into. PRE-ATT&CK's space is a little different: visible means that maybe an ISP, say a DNS provider, somebody in the chain, actually has visibility into it. Third, evidence that it exists. When PRE-ATT&CK was originally written, there really was not much out there describing what happens before an intrusion. One of the things that helps now is things like indictments; there have been a number of reports out, and threat intelligence companies have talked a lot more about adversary preparation, about getting ready for an intrusion. I have not actually shown the
PRE-ATT&CK matrix yet, but if you have seen PRE-ATT&CK before, it looks like ATT&CK: tactics across the top, techniques down the side. This yellow here is the intelligence piece: things like priority definition and planning, thinking about getting ready for an intrusion. We think those do not really meet the three standards I just set out. Looking across the rest of it, we realized it actually divides pretty nicely into two different sections, the green and the blue. You have this intelligence planning that we are calling out of scope, Reconnaissance, which would be made into a new tactic in Enterprise ATT&CK, and Resource Development. To give you an idea of what these look like: I could not put the word draft on here many more times and still have the slide render well, so I want to make sure that shows up in the background of your screenshots. These are absolutely not the final technique names; they will change, not they might change. Reconnaissance is very focused on the victim: gathering information on the victim itself, gathering information on individuals within the victim, sort of everything around what the victim's landscape looks like. We also have Resource Development.
Resource Development has infrastructure and capabilities. For those of you who know threat models that have adversary, victim, infrastructure, and capabilities, that might sound a little familiar, and that is not a complete accident. Resource Development is the adversary building up the pieces they need in order to do their intrusion: building up, say, certificates they need, and getting hold of any servers they want, whether they are buying or building them. Building it themselves is different from acquiring it through other mechanisms. This is the rough shape of what we think PRE-ATT&CK is going to look like. You may notice that what this really lacks is sub-techniques, so this is not something you are going to see in the next few weeks. A couple of people have complained today that we did a release last Thursday, right before ATT&CKcon, and people had to change their slides; this one is going to be a little slower in coming. I should say that the techniques you are seeing here today are the product of a number of whiteboarding sessions between Katie Nickels and myself. I intentionally wanted to leave some time for questions, because I have talked to a bunch of you who are interested in PRE-ATT&CK, so I think there might be some. Without further ado, does anyone have any questions?
>>One question that came to
mind right away, and I guess it could be close to resource gathering; that is what I am trying to figure out, where it would fit. Supply-chain attacks, where you do things beforehand. The question is, do you think you are going to integrate those into PRE-ATT&CK, as resource gathering or something like that, or were you thinking that is out of scope? I would like to know.
>>At least a piece of supply-chain attack is already in Initial Access, so the actual doing of it we have already included in ATT&CK. I would expect any sort of electronic preparation of the battlefield to be within scope. We are probably not going to go as far as somebody sneaking into a Chinese factory and Bloomberg writing about it, but we are looking at just the technical pieces. Somebody breaking into a software development website or something like that, the more traditional supply-chain attacks we really see in the wild, I would absolutely expect to be in scope.
>>Do you think this is going to fuel discussion around brand monitoring and the importance of that possibly hot topic?
>>On what the detections and
mitigations are likely to look like: we do not actually have the techniques ready for this yet. The way we develop new tactics is that we sit down, figure out what the techniques are within the tactic, and then assign the technique write-ups out. We have not gotten to that last step yet. My suspicion is that the detection and mitigation for some of these techniques around domain names and registering them is going to be around brand monitoring, so that is sort of a natural place to take a look to see some of these things happening.
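One concrete form of the monitoring discussed here is watching for lookalike registrations of your own domains. A toy sketch of the idea, where the generation rules are illustrative and far from a complete typosquat algorithm:

```python
def typosquat_candidates(domain):
    """Generate a few simple lookalike domains worth monitoring:
    character-omission variants and common homoglyph swaps."""
    name, _, tld = domain.partition(".")
    candidates = set()
    # character-omission variants, e.g. "mitre" -> "mtre"
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
    # common digit-for-letter homoglyph substitutions
    for letter, digit in [("o", "0"), ("l", "1"), ("i", "1")]:
        if letter in name:
            candidates.add(name.replace(letter, digit) + "." + tld)
    candidates.discard(domain)  # never watch for the real domain itself
    return sorted(candidates)

watchlist = typosquat_candidates("mitre.org")
```

A monitoring job would then compare a feed of newly registered domains against this watchlist and alert on matches.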
>>Other questions? If there are no other questions, please join me in thanking Adam Pennington. [ Applause ] Thanks for that, Adam. We hope you enjoyed these ATT&CK team updates; that was feedback from last year, that you wanted to hear more from the team, so I figured we would give you a preview of what we have done and what is yet to come. PRE-ATT&CK should be coming soon. Now it is my pleasure to welcome Blake back to the stage as we start to wrap things up here at ATT&CKcon 2.0. It has been an amazing three
days, and not just because of all of you folks here in person; folks around the world have watched from all kinds of locations and been part of the global ATT&CK community. Just a couple of examples: a colleague from Wisconsin; folks in Poland having a watch party there; Australia; people listening in from Germany; UTC+11, so even in the middle of the night they tuned in to ATT&CKcon 2.0. Some confirmed this morning that they were watching yesterday and checked in with us again today, which speaks to the global nature of this amazing community. I am going to turn it over to Blake to give you a recap of the birds of a feather sessions, which were
Monday afternoon. >>We had several birds of a
feather sessions on topics like sub-techniques, cloud, controls, and ICS. We went through some of the topic notes that our facilitators collected, so let me do a quick recap of those. One of those sessions was on [ Indiscernible ] detection. Some of the discussion leaned toward thinking about the confidence levels of detections, rather than just making a binary detected-or-not-detected call, because confidence really tells you where your coverage is; there are several different ways an adversary can attack something, and a lot of times richer assessments of how you detect in your own environment, based on the tools and resources you have, are really valuable. It is also important to help red and blue come together for purple teaming; that way you can do much more rapid detection enhancement, with red and blue alerting each other, and you will improve your detections a lot over time. If you are wondering where to get started, looking at your data sources is really the best place, because you need to make sure you have the right data available to start detecting threats.
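The confidence-scoring idea raised in that session can be sketched in a few lines. This is only an illustration; the technique IDs and scores below are invented, not a real assessment scheme:

```python
# Per-technique detection confidences (0.0 = none, 1.0 = high);
# one technique may have several detections of different quality.
coverage = {
    "T1059": [0.9, 0.4],   # Command and Scripting Interpreter
    "T1078": [0.3],        # Valid Accounts
    "T1190": [],           # Exploit Public-Facing Application
}

def technique_score(confidences):
    """Score by the best available detection rather than a binary yes/no."""
    return max(confidences, default=0.0)

report = {technique: technique_score(scores) for technique, scores in coverage.items()}
# report now shows *where* coverage is weak, instead of just "2 of 3 detected"
```

The aggregation rule (max, mean, or something weighted by data-source quality) is a design choice each team would make for itself.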
>>We heard a lot about that today, too.
>>Another one was the analytics session. One of the topics there was that we need to encourage users and the community to write and exchange analytics. That is very tough for a lot of people: sharing how their environment detects things is difficult, and it is not without risk, since it could let adversaries know exactly how they are being detected. But we think there could be ways to generalize the information so that we can share ideas without risking our own environments. Another one was that there are a lot of properties around analytics that we need to think about and capture; for example, evasion by adversaries is easy if your detections are based on static strings, since they can just change those with small variations and things like that. Another one was that most of ATT&CK lacks public analytic coverage yet; this is something we are trying to change with CAR, so we are looking for contributions. If you have analytics that work for you in certain areas and are comfortable sharing them, we would love to take them and help the community in those areas.
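The static-strings point is easy to demonstrate: an analytic matching an exact tool name breaks the moment the adversary renames the binary, while one keyed to behavior survives. A toy comparison, with command lines and patterns that are purely illustrative:

```python
import re

def brittle_detect(cmdline):
    """Static-string analytic: trivially evaded by renaming the tool."""
    return "mimikatz.exe" in cmdline

def generalized_detect(cmdline):
    """Matches the tool's characteristic module syntax instead of its name."""
    return re.search(r"sekurlsa::\w+", cmdline) is not None

# The adversary renames the binary, so the static string never appears.
renamed = 'c:\\temp\\svchost_.exe "sekurlsa::logonpasswords"'
```

Here `brittle_detect(renamed)` misses while `generalized_detect(renamed)` still fires, which is the kind of property the session suggested capturing alongside shared analytics.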
can be hard to read by other analysts when sharing salons and points for trying to
generalize it in a way that is applicable to multiple
environments and multiple tools. Another one was assessments
SOC assessments and ATT&CK-based assessments. Dealing with abstraction can be a pain point; sometimes it is hard to determine whether something is a procedure or a technique, and that can make a difference in how you actually assess it and in what your results are. Measuring coverage is doable; the hard part is tracking it over time. How often do you do an assessment, and how do you measure progress from there? That is a big problem for people, so the more information and best practices we can gather in that area, the more I think it would help the community. We also had feature requests, such as understanding how to create a single score or chart to communicate maturity level. This may be due to ATT&CK being too large or complex; people need a better way of whittling things down to what to focus on so they can make better decisions, which is a really good point. Cloud was another good topic. We
heard that a lot of organizations are being pushed to the cloud. There are several reasons: it could be a cost measure, or it could be that the technology they rely on is moving to the cloud. So a lot of people are happy to see that we are covering cloud in ATT&CK now, which is awesome. Some people felt the SaaS platform may not be that interesting, and they are more interested in specific cloud services and techniques for those; we are trying to cover both, like I said in my presentation yesterday. We think the cloud is a dynamic environment, and platforms are likely to change over time, so you may find that SaaS is deprecated down the road in favor of more specific cloud services, once we get more specific techniques for those services. Another point was that identity and access management is really the new perimeter for a lot of people, because it basically comes down to accounts: how you get access to the account, and how you use those credentials and credential materials to access resources, because obviously adversaries are going to target those things. Another
one was our purple and red teaming session. Buy-in from the leadership chain is really important, because that helps collaboration and frees up resources for red and blue teams to work toward shared goals. Another good point was the need to avoid making things too adversarial, because if you create friction between teams, they will not make progress in the ways they need to in order to improve the security of the environment. And the last point was that purple teaming is really just red teaming done right, and I completely agree. Another good
one was sub-techniques, a big topic on a lot of people's minds these days. We got a lot out of it, just as the ATT&CK team; it was a relatively small group and a focused session, but there was a lot of good feedback. So far we have been able to quell some concerns about the structure of sub-techniques, which was great. One of the participants, who shall not be named, did say that his organization has created 900 techniques as extensions to ATT&CK, and that sub-techniques will literally save his life, so that was great to hear. Literally. Another good point was that there is a lot of concern over whether ATT&CK assessments will become easier or more difficult as the framework grows. The general consensus was the assertion that sub-techniques will help people learn ATT&CK more easily, because the model is more refined; it is easier to pin down exactly where something should go because there is more definition around it. The last session of the day was our AI and ML
session. There is obviously a need to be able to describe intel in a more normalized and efficient way so that AI and ML can be used better across it, but that is a hard problem. The second point was that it is often hard to describe how ML-based detections map to techniques because of the more [ Indiscernible ] nature of how the technology works.
>>Lots of conversations; I know I
go back and forth with Blake on this one. We have been hearing from you, the community, and we always try to take feedback, and these gatherings are a chance for us to have conversations in the hallways, over some candy or LaCroix, about what you are thinking and what you need. So for the first one: Blake, should we do sub-techniques?
>>Yes, of course.
>>We heard you, thank you so much; we are apparently saving lives with that sub-techniques conversation, if you ask a few people. We also want to know how we can help you better use ATT&CK, and sometimes the answer is that you are not sure how to get started with it. We heard that quite a bit, so the getting-started content on the website is something the team has put work into, but we are also looking at work that you as a community can help us with, because it is not just us; the power of ATT&CK is in you all. In the presentations, several people described how they started using ATT&CK, and that is thanks to all of you, but it is on our minds as well. Next: do we cover 100% of ATT&CK?
>>No.
>>Let's say that again, Blake: 100% coverage, is that realistic?
>>No.
>>If you need help explaining that to stakeholders: the idea that anyone will get one hundred percent coverage is crazy, and perfect visibility and perfect detection, as we have talked about again and again, do not exist. So we hope to put out some content to help you convey to your stakeholders where things really stand. "What is the team doing next?" We hear that conversation a lot; you really want to know more from us about where we are going. We are going to try to write a blog post; we can do better and we will do better, because we realize it is not just curiosity: you have to plan out your teams over the next year, two years, five years. We will not tell you what to do, but what we do influences what you will do, so part of the past three days was to make sure you know that we hear you. As we start to wrap up, I loved this tweet from Christian: he commented that he knows
it has been a good ATT&CKcon when his shower thoughts are filled with ideas of how to better apply ATT&CK, and that is what this is all about. We started off by saying we hope you will come together, share ideas about what is working and what is not working so well, and take ideas back to make your organizations safer. Christian, I have to agree. We also noted challenges; there was a common slide that we saw across a lot of presenters, and yes, there are a lot of challenges ahead of us, but we have got this, and we are in it together. To go back to what Toni started off with: it is the friends we made along the way, and the difference we make along the way. We are going to figure out how to defeat the adversaries in our networks and solve problems together, whether it is here or with respect to your own organization.
>>I would like to give a shout out to one of the original mentors on the ATT&CK project, [ Indiscernible ]; he was the person who helped us start the original research that went into it. So a shout out to him and to the effort he put into managing our research, not just the ATT&CK research but also the broader research program, over several years. One thing he tried to instill in us: he would always ask, what is your North Star? I can confidently say that it is the community. We are doing this for you, because we want ATT&CK to be better and to serve you, so thank you very much; you are the reason for this.
>>That is awesome. I love the
North Star. We would love to hear from you: did ATT&CKcon 2.0 change your mind, or change minds on IT security Twitter? There is going to be a survey out from ATT&CKcon, and just like with the framework, we take feedback to make it better; we want to make this event even better next year. As many of you know, we cannot put on these amazing events without a lot of people, so I want to thank a few folks. All of our amazing speakers put together the research on those slides, and that is not easy, so thank you, speakers, and thank you to everyone who invested in making this an amazing event in person. It was great to chat with all of you here, and I hope you enjoyed the snacks, as did the online attendees from around the world. Thank you for tweeting at us and for engaging; it is exciting to know that you are watching from around the world. And to our audiovisual team, we offer a round of applause as well.
I wanted to give a special shout out to a few people. There are so many people; it seems pretty much everyone on the ATT&CK team helped out. Our strategic communications team has been unbelievable, planning basically since last year's ATT&CKcon [ Indiscernible ], and that is just a small portion of the many communications and events folks behind the scenes, so let's give them a round of applause. [ Applause ] One thing that we learned as we
worked through all of our ATT&CK communications externally, as well as this event, is that things work really well when the technical team and the communications folks work together. This community is different, and we found an amazing partnership where, for example, the graphics folks came to me wanting the techniques in a visual to be logically sequenced, so we went through the actual group pages trying to think of realistic chaining of techniques. What a great partnership between the graphics folks and the technical team to make those things happen. I also want to give a shout out to the ATT&CK team; Adam Pennington put so much work into this, so please join me in thanking him for everything he has done. Over many months, and he was helping
me at 10:00 last night, it has been a straight shot between us. To the other folks on the program committee, Jamie Blakely, John, Andy, and Jen Burns, who is no longer with the team: we appreciate your efforts as well. A special shout out to Jamie, who was amazing on the ATT&CKcon couch. Well done, Jamie. [ Applause ] There are so many other people to thank, but we are just really grateful and humbled by everything that you do. We love what we do every day, and we love this community so much, so thank you all for coming. We really appreciate it, and we hope to see you next year, if not before, on Twitter or in person at conferences. Thank you all so much, and we will see you later. [ Applause ] Thank you.
