Architecting for Security in Higher Education

Ed Moyle | Guest Author

Estimated Reading Time: 4 minutes

There’s a saying, sometimes described as an ancient Japanese proverb (though that origin is most likely apocryphal): “fix the problem, not the blame.” The gist is that it’s more important to address an issue than to focus on whose fault it is. There’s wisdom in that, particularly when it comes to cybersecurity. Security pros know that fixing the blame is an all-too-common reality in the immediate aftermath of a security event.

We can see this not only from personal experience, if we’ve ever lived through an incident, but also from the headlines. For example, as I write this, details are emerging about Facebook storing millions of Instagram passwords unencrypted. Mark my words: there will be significant blowback and blame assignment down the road. It’s easy to slip into the blame game, but there’s a much more practical response: use the incident as a proof point and a learning moment for improving our own posture.

Now don’t for a minute read that as me letting Facebook off the hook for what is, frankly, an appalling situation. But think through the likely circumstances under which it arose, and you’ll find something that is all too likely to be lurking in many of our own environments as well. In fact, in a higher education context, similar gaffes (unless active measures are taken to prevent them) are, if anything, more likely to occur than in a shop like Facebook. It’s important that we understand how these situations arise, how to find them when they do, and how to maintain the discipline to follow through and close the issues.

Changing Priorities, Changing Threats

Imagine with me for a minute what it must have been like to work at Instagram in the early days: that first small team working on a shoestring budget, pushing through late nights and weekends to get their fledgling (and highly speculative) new application off the ground. For that team, is the higher priority getting the product out the door quickly, or making sure that user password handling adheres to best practice? It’s the rare startup indeed that can forgo an on-time release in furtherance of a customer-invisible feature like password handling.

Maybe you’re thinking, “But that’s a startup. We’re a renowned institution. It’s a totally different thing.” You’d be right, of course, but there are contextual similarities that can lead to similar outcomes. For example, most institutions run a hodge-podge of heavily customized commercial, open-source, and in-house-developed (or, in some cases, student-developed) applications sitting side by side. Likewise, from a security perspective, there’s never enough money or time to do everything we know we need to do, assuming, of course, that the people building the applications know what securing them requires in the first place. In light of that, how likely is it that every step has been taken to ensure the applications we use adhere to best practice? Not likely, right? Particularly when you factor in the whole roster of secure application design goals: password/secrets management, cryptography use, communications security, and input filtering.
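For readers who want the concrete version of that “customer-invisible feature”: best practice is to never store (or log) the password itself, only a per-user random salt and a deliberately slow hash derived from it. Here’s a minimal sketch using Python’s standard-library scrypt, with illustrative (not recommended-as-is) cost parameters:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a random salt and a slow, salted hash, never the password."""
    salt = os.urandom(16)  # unique per user
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-derive the hash from the attempted password; compare in constant time."""
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored_digest)
```

The point isn’t the specific algorithm; it’s that getting even this small amount of code right takes time and attention that a deadline-driven team may not believe it has.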

The key dynamic is that both the priority of security and the threat context change fundamentally over the lifetime of a given application. For example, the cleartext password storage that might mean a little egg on the face for a small startup is woefully inadequate for a large concern like Facebook. Likewise, a user population of a few thousand is a whole different ball game from an application used by millions of people every day.

The Lesson for Us

For the practically minded technology leader, the question becomes less “Who’s at fault?” and more “How do we address this?” Doing so isn’t hard, but it does take discipline. One approach that I’ve found effective is application threat modeling: keeping a (relatively) current threat model of the application, with periodic re-evaluation, to account for natural changes in usage, threat landscape, context, priority, and other factors.

What is threat modeling? It’s a process that allows a security architect to systematically deconstruct an application, map out its individual pieces or components, highlight potential weak points between (and within) them, and construct a list of areas of concern. It’s beyond the scope of a post like this one to outline the approach in detail, but the OWASP page (or the excellent Wiley book) lays it out step by step. It’s workmanlike, systematic, and potentially a little time consuming, at least until you’ve done it a few times, but it is incredibly valuable for situations exactly like the one Facebook is struggling with today.
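To give a flavor of what that deconstruction produces, here is a deliberately simplified sketch in Python. The component names are hypothetical and the enumeration is naive (a real exercise, STRIDE-based or otherwise, involves far more judgment), but it shows the shape of the output: components, the data flows between them, and a candidate list of concerns per flow.

```python
from dataclasses import dataclass

# STRIDE is a common mnemonic for candidate threat categories.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

@dataclass
class DataFlow:
    source: str
    destination: str
    crosses_trust_boundary: bool

@dataclass
class Finding:
    flow: DataFlow
    category: str
    status: str = "open"  # open / mitigated / accepted

def enumerate_findings(flows: list[DataFlow]) -> list[Finding]:
    """Naively flag every STRIDE category on flows that cross a trust boundary."""
    return [
        Finding(flow, category)
        for flow in flows
        if flow.crosses_trust_boundary
        for category in STRIDE
    ]

# Example: a login form posting credentials to an authentication service.
login = DataFlow("Browser", "Auth service", crosses_trust_boundary=True)
for finding in enumerate_findings([login]):
    print(f"{finding.flow.source} -> {finding.flow.destination}: {finding.category}")
```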

There are two reasons why this is so. The first is that threat modeling is one of the better ways (I’d argue the best way, though I’m sure some would debate it) to identify application security issues, i.e., to know when and where potential security problems exist within the application in the first place. Usage can escalate quickly, so it’s best to know early on where latent issues lie, so that we can mitigate them and track them toward eventual resolution and closure as the application is refined and maintained.

The second reason is that usage parameters and priorities change, so having a road map of the potential and still-open issues lets us know quickly where the weak spots are; it’s like shining a flashlight on them when the situation changes. Should usage change, we can consult the threat model in light of the new parameters and make plans to address anything that has become more problematic in the new context. It does take some discipline and a (small) up-front investment of time, but in the long term it will absolutely pay for itself. In fact, it might even save your bacon.
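As a toy illustration of that re-evaluation step (the finding records and usage thresholds below are entirely hypothetical), the value of the road map is that a change in one parameter, like user count, immediately surfaces which old decisions need a second look:

```python
# Hypothetical still-open findings carried forward from an earlier threat model.
# "accepted_up_to" records the usage level at which the risk decision was made.
findings = [
    {"id": "TM-07", "issue": "passwords written to debug logs",
     "status": "accepted", "accepted_up_to": 10_000},
    {"id": "TM-12", "issue": "no rate limiting on login endpoint",
     "status": "open", "accepted_up_to": 50_000},
]

current_users = 250_000  # usage has grown well past the original assumptions

# Re-evaluation: any decision made under old assumptions gets flagged for review.
for f in findings:
    if current_users > f["accepted_up_to"]:
        print(f"{f['id']}: revisit '{f['issue']}' "
              f"(assessed at {f['accepted_up_to']:,} users; now {current_users:,})")
```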


The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of The Tambellini Group. To express your views in this forum, please contact Hilary Billingslea, Director, Marketing Communications & Operations, The Tambellini Group.

©Copyright 2019, The Tambellini Group. All Rights Reserved.


Ed Moyle | Guest Author
Ed Moyle is the General Manager and Chief Content Officer of the Prelude Institute. In his 20 years in information security, he has held numerous positions including Director of Thought Leadership and Research of ISACA, Senior Security Strategist with Savvis, founding partner of the analyst firm Security Curve, Senior Manager with CTG's global security practice, and Vice President and Information Security Officer with Merrill Lynch Investment Managers. Mr. Moyle is co-author of "Cryptographic Libraries for Developers" and a frequent contributor to the information security industry as an author, public speaker, and analyst.
