diff --git a/404.html b/404.html index b0be31bc..e6efc348 100644 --- a/404.html +++ b/404.html @@ -3,4 +3,5 @@ nav: projects title: Page Not Found --- + That page doesn't exist diff --git a/_data/data.yml b/_data/data.yml index a32392a0..99c18a14 100644 --- a/_data/data.yml +++ b/_data/data.yml @@ -681,7 +681,7 @@ projects: name: "Testing Tools" anchor: testing_tools - intro: "Though all of SSL’s research initiatives—past + intro: "Though all of SSL's research initiatives—past and present—revolve around ensuring secure operation of computer systems, the specific areas addressed by the team vary greatly. Initiatives are grouped into the following categories:" @@ -700,26 +700,6 @@ projects: malicious update is installed. TUF is a comprehensive, flexible framework to secure software updates even in situations where the software repository is compromised." - - "Developers can integrate TUF into any software update system, or - native programming language due to its concise, self-contained - architecture and specification. In 2019, it became both the - first security project and the first project led by an academic researcher to achieve - graduate status within the Cloud - Native Computing Foundation (CNCF). - Buy our merch!" - products: "TUF is used in - production by a variety of - companies, including Microsoft, IBM, - VMware, - DigitalOcean, - Cloudflare, and - Docker. - It has been standardized for Python as documented in PEPs - 458 and - 480. - TUF, and Docker's popular implementation of TUF, are now - projects as part of the CNCF." people: - *sebastien_awwad - *marina_moore @@ -746,33 +726,6 @@ projects: comprehensive array of security attacks, and is resilient to partial compromises, while addressing automotive specific vulnerabilities and limitations." - - "Uptane was named one of - the Top - Security Innovations of 2017 by Popular Science Magazine. - Uptane is a Joint Development Foundation - project of the Linux Foundation, - operating under the formal title of Joint Development Foundation Projects, LLC, Uptane Series." - products: "Uptane has already been adopted by - multiple auto makers. - Uptane has been - integrated - into multiple products including OTA Plus and ATS Garage, two over-the-air - software update products from Advanced Telematic Systems. ATS also integrated - aktualizr, - a C++ implementation of Uptane, into Automotive Grade Linux. - On January 25, 2018, Airbiquity - announced receipt of a BIG Award for Business - in the 2017 New Product Category for its OTAmatic - program, in which Uptane is a key component of the security package. - Our website contains high level - information about the project, including the - Uptane Standard for Design and Implementation v.2.0.0 - and - Uptane Deployment Best Practices. - We invite all - security researchers and academics to perform a - security review of Uptane." people: - *trishank_kuppusamy - *sebastien_awwad @@ -807,26 +760,6 @@ projects: signing information about each step in the process. As such, in-toto provides accountability about how software is written, packaged and distributed...and by who." - products: "The in-toto software has already been integrated into several - open source projects. In 2019, Datadog announced the use of TUF - and in-toto on their agents integration downloader. In November 23 - of 2020, the framework released Version - 1.0.0, and on March 10, CNCF announced - the project had graduated to the incubator. 
Also, a constellation - of rebuilders are - generating in-toto metadata so you can check your Debian packages were - built reproducibly when using apt. We - welcome you to download the in-toto instructions, which includes a demo version of our - software, or to clone our repository and follow - the directions to integrate in-toto into your software project!" people: - *santiago_torres - *lukas_puhringer @@ -856,19 +789,6 @@ projects: lets you use new cryptographic algorithms (SHA256, etc.), protects against other attacks Git is vulnerable to, and more — all while being backwards compatible with GitHub, GitLab, etc." - products: "gittuf is an incubating project at the Open Source Security - Foundation (OpenSSF) as part of the Supply Chain Integrity Working - Group." - people: - - *aditya_sirish - - *patrick_zielinski - - name: "Billy Lynch (Chainguard)" - link: "https://github.com/wlynch" - - *neil_naveen - - *yongjae_chung - - name: "Reza Curtmola (NJIT)" - link: "https://cs.njit.edu/people/curtmola/" - - *justin_cappos tags: - *security @@ -915,15 +835,6 @@ projects: enhance software resilience by leveraging the Lind sandbox and Intel SGX to create highly secure computing environments for critical operations." - products: "" - people: - - *yaxuan_wen - - *yuchen_zhang - - name: "Marcela Melara (Intel)" - link: "https://masomel.github.io/index.html" - - name: "John Kjell (TestifySec)" - link: "https://www.linkedin.com/" - - *justin_cappos tags: - *security @@ -942,19 +853,6 @@ projects: objective of TAF is to ensure that documents stored in Git repositories remain accessible and verifiable, not just in the immediate future, but for decades and even centuries to come." - products: "TAF is already being used by about a dozen governments, - including the District of - Columbia, Baltimore, and the State of Maryland to secure their laws. It is also used by a - variety of law libraries. For more information visit the project's site." - people: - - *renata_vaderna - - *patrick_zielinski - - name: "Dusan Nikolic (Open Law Library)" - link: "https://github.com/n-dusan" - - name: "David Greisen (Open Law Library)" - link: "https://github.com/dgreisen" - - *justin_cappos tags: - *security @@ -971,26 +869,6 @@ projects: look to build a firm empirical foundation for reducing code confusion in software development and, thus, also reduce the frequency of buggy and malfunctioning programs." - products: "The project - website provides background on our theory, studies, and analysis for - this work. We make all of our study materials - and anonymized data openly available so that other researchers can - replicate, validate, and build on our findings. Our results have been - used to fix bugs in a variety of software projects, including the linux - kernel." - people: - - *dan_gopstein - - *lois_delong - - name: Phyllis Frankl - link: "http://engineering.nyu.edu/people/phyllis-frankl" - - name: April Yu Yan (UCSD) - - name: Martin Yeh (PSU) - link: "http://martinyeh.com/" - - *renata_vaderna - - *yanyan_zhuang - - *justin_cappos tags: - *software_engineering @@ -1012,14 +890,6 @@ projects: shows that CacheCash scales to meet the workload of even the most popular services used today. By building CacheCash, we intend to change CDNs by more readily and pervasively including end-user served content." - products: "We are in stealth mode! If you want to be contacted when we - publicly release, please send an email to cachecash@googlegroups.com." 
- people: - - *ghada_almashaqbeh - - name: "Allison Bishop Lewko" - link: "http://www.cs.columbia.edu/~allison" - - *justin_cappos tags: - *security - *cryptography @@ -1041,18 +911,6 @@ projects: access basic system requests—are much less likely to contain vulnerabilities. This limited kernel access reduces the possibility of interaction with flawed code." - products: "We are in stealth mode! If you want to be contacted when we - publicly release, please send an email to lind-dev@googlegroups.com." - people: - - *nick_renner - - *vidya_lakshmi_rajagopalan - - *yaxuan_wen - - *yiwen_li - - *justin_cappos - - *sanchit_sahay - - name: "Brendan Dolan-Gavitt" - link: "http://engineering.nyu.edu/people/brendan-dolan-gavitt" tags: - *security - *systems @@ -1075,11 +933,6 @@ projects: project governance with well-defined maintainer and contribution processes, and encourages high documentation standards, including comprehensive READMEs, changelogs, and support guidance." - products: "" - people: - - *justin_cappos - - *marco_de_vincenzi - - *ann_malavet tags: - *security - *testing_tools @@ -1099,9 +952,6 @@ projects: sanitization functions to limit data exposure, and ensures least-privilege access to MCP tools. Unlike traditional information flow control systems, ShardGuard avoids complex labeling and taint tracking, offering a more practical yet effective path to safe LLM-driven automation." - products: "" - people: - - *justin_cappos tags: - *security @@ -1117,16 +967,6 @@ projects: deployment. The program enables software developers to identify vulnerabilities in product designs long before they are packaged and released." - products: "We are in stealth mode! If you want to be contacted when we - publicly release, please email crashsimulator@googlegroups.com." - people: - - *preston_moore - - *justin_cappos - - name: "Phyllis Frankl" - link: "http://engineering.nyu.edu/people/phyllis-frankl" - - name: "Thomas Wies" - link: "https://cs.nyu.edu/wies/" tags: - *software_engineering - *testing_tools @@ -1146,12 +986,6 @@ projects: are forced to crack passwords in sets. This increases the attackers’ level of difficulty, making a PolyPasswordHasher-enabled database very hard to breach, even for an adversary with millions of computers." - products: "PPH is used in several projects, including the Seattle - Clearinghouse and BioBank. PPH has implementations available in seven languages, - including Java, Python, C, and Ruby. Easy to integrate PPH libraries, such as the Pluggable Authentication Module (PAM), are also available for a number of operating systems, including Linux and OS X." - people: - - *santiago_torres - - *justin_cappos tags: - *cryptography - *security @@ -1169,21 +1003,6 @@ projects: (and acquire) computing resources from their desktop, laptop, or smartphone in the same manner as with cloud computing. Seattle is used by educators, and for software development and research by thousands of people around the world." - products: "Seattle is used by thousands of developers and has been - installed on tens of thousands of devices. Our website contains - information and links to educational - modules, and a clearinghouse - of available resources for those who wish to download and use the - Seattle program, or to donate - some computing power on their device for research purposes." 
- people: - - *albert_rafetseder - - *lukas_puhringer - - *sebastien_awwad - - *justin_cappos tags: - *security - *networking @@ -1202,22 +1021,6 @@ projects: Sensibility also has additional security protections that ensure the safety of the device, while giving researchers access to unique information." - products: "We have had four years of hack-a-thons, where teams compete - to build the best application for Sensibility. Install - our Android app or learn more by visiting our project's - blog!" - people: - - *albert_rafetseder - - *yanyan_zhuang - - name: "Yu Hu" - - name: "Richard Weiss" - link: "http://evergreen.edu/directory/people/weissr" - - name: "Leon Reznik" - link: "https://www.cs.rit.edu/people/faculty/lr" - - *lukas_puhringer - - *justin_cappos tags: - *testbeds - *privacy @@ -1235,21 +1038,6 @@ projects: created by the misunderstanding of APIs by developers. In particular, we are looking for security-related blind spots in popular Java and Python APIs as a way to more holistically find and address bugs." - products: "We are in stealth mode! If you want to be contacted when we - publicly release, please email blindspots@googlegroups.com." - people: - - *justin_cappos - - *lois_delong - - name: "Daniela Oliviera (UF)" - link: "http://www.daniela.ece.ufl.edu/Home.html" - - name: "Eliany Perez (UF)" - - name: "Sajidur Rahman (UF)" - - name: "Natalie Ebner (UF)" - link: "http://www.psych.ufl.edu/~ebner/" - - name: "Tian Lin (UF)" - - name: "Yuriy Brun (UMass-Amherst)" - link: "https://people.cs.umass.edu/~brun/" tags: - *software_engineering @@ -1269,22 +1057,6 @@ projects: a network model, syscalls that deviate from expected network semantics can be identified. In return, these deviations can be mapped to a diagnosis by using a set of heuristics." - products: "NetCheck identified a wide array of networking bugs in - different projects, including in Python. The code for NetCheck - is available. However, it is worth reading our blog first to learn - about our experiences." - people: - - *yanyan_zhuang - - name: "Eleni Gessiou" - - name: "Steven Portzer" - - name: "Fraida Fund" - - name: "Monzur Muhammad" - - name: "Ivan Beschastnikh (UBC)" - link: "https://www.cs.ubc.ca/~bestchai/" - - *justin_cappos tags: - *software_engineering - *testing_tools @@ -1302,13 +1074,6 @@ projects: potentially could be requested, the user's preferences—and any assumptions that could be deduced from those preferences—remain hidden." - products: "The prototype code for this project is available at its github - repository." - people: - - *luqin_wang - - *trishank_kuppusamy - - *justin_cappos tags: - *security - *cryptography @@ -1329,17 +1094,6 @@ projects: such as firewalls, multiple Antivirus scanners, IDSs, and IPSs. However, VSN can guarantee lower costs for management, and better performance for its end users." - products: "This patented technique - and its source code are available on the project web site." - people: - - *sai_peddinti - - name: "Keith Ross" - link: "http://engineering.nyu.edu/people/keith-w-ross" - - name: "Nasir Memon" - link: "http://engineering.nyu.edu/people/nasir-memon" - - *justin_cappos tags: - *security - *networking diff --git a/_includes/comments.html b/_includes/comments.html deleted file mode 100644 index 822ebfdc..00000000 --- a/_includes/comments.html +++ /dev/null @@ -1,21 +0,0 @@ - -
- - - diff --git a/_includes/header.html b/_includes/header.html index d2c9c407..84f408cc 100644 --- a/_includes/header.html +++ b/_includes/header.html @@ -23,9 +23,8 @@ diff --git a/_layouts/article.html b/_layouts/article.html deleted file mode 100644 index b756f3f7..00000000 --- a/_layouts/article.html +++ /dev/null @@ -1,28 +0,0 @@ - - -{% include head.html %} - -
- {% include header.html %} - -
-
- {% if page.title %} -
{{ page.title | size_upcase }}
- {% endif %} - {{ page.date | date: "%Y-%m-%d" }} · Posted by: {{ page.author }} · Categories: {{ page.categories}} · Comments
- - {{ content }} -
-
- - {% include footer.html %} - - - -{% include scripts.html %} - - - diff --git a/_posts/2017-03-20-first-blog-post.md b/_posts/2017-03-20-first-blog-post.md deleted file mode 100644 index a71d4c95..00000000 --- a/_posts/2017-03-20-first-blog-post.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -layout: article -title: Why Does Our Lab Need A Blog? -subnav: blog -comments: true -tagline: "Ideas do not always come in conference paper-sized chunks. Sometimes they are too small to fill a paper. Other times they find themselves cut loose from a paper that did not have enough pages to discuss everything. And, in..." -author: 'Justin Cappos' -categories: - - 'Informational' - ---- - -Ideas do not always come in conference paper-sized chunks. Sometimes they -are too small to fill a paper. Other times they find themselves cut loose -from a paper that did not have enough pages to discuss everything. And, in -other instances, the ideas may be too specialized, practical, or -speculative to be of interest to academics. - -This blog provides a home for ideas like the ones described above. It -offers a less formal vehicle for those of us in the Secure Systems Lab to -share more of our ideas with the world. These ideas could become the seeds -for new research paths. More likely, they will help us fulfill our core -goal of [solving problems in practice](/personalpages/jcappos/philosophy.htm), -by presenting a platform for us to discuss our success, failures, and -findings in transitioning our research into practical use. - -So, this blog, which will feature new posts about once a week, will talk -about topics like: - -1. How we are helping a project to mature -2. Why a project had a setback and what we learned from that problem -3. What we are ready to share about a new project -4. What we think about pressing technological issues - - -We hope you enjoy reading our blog and welcome your comments! - -Justin Cappos (on behalf of the Secure Systems Lab) diff --git a/_posts/2017-03-27-SHAttered-blog-post.md b/_posts/2017-03-27-SHAttered-blog-post.md deleted file mode 100644 index 7a819045..00000000 --- a/_posts/2017-03-27-SHAttered-blog-post.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -layout: article -title: "SHAttered: Not All It's Cracked Up to Be" -subnav: blog -comments: true -tagline: "NYU graduate student Santiago Torres-Arias has a one-word message for those who might be shaken up by Google’s February 23 announcement of \"the first practical technique\" for attacking systems based on SHA-1, and that word is \"relax.\"" -author: 'Lois Anne DeLong' -categories: - - 'Informational' ---- - -NYU graduate student Santiago Torres-Arias has a one-word message for those who might be shaken up by Google’s February 23 announcement of “the first practical technique” for attacking systems based on SHA-1, and that word is “relax.” While acknowledging that the attack, carried out by researchers at Google and the CWI Institute in Amsterdam, represents an “important milestone for the history of cryptographic hash algorithms,” he was quick to point out that just because the attack CAN be done, it is unlikely to become the path of choice for malicious parties looking to bring down systems secured by cryptographic hash algorithms. - -The SHAttered attack, documented in [a technical article](https://shattered.io/static/shattered.pdf) written by researchers Marc Stevens, et al., seems to negate conventional wisdom that breaking SHA-1 required expenditures of too much time, computational power and money. 
SHA-1 is a hash algorithm, developed by the National Security Agency and published by the National Institute of Standards and Technology, that can generate a “fixed length fingerprint,” as Torres-Arias labels it. The presence of these unique fingerprints serves as proof that a file has not been tampered with, and the inherent difficulty of creating two different files with identical SHA-1 numbers has served as an effective deterrent. As a result, for several decades SHA-1 was the de facto standard for use in secure signing systems.
-
-Torres-Arias, who shared his comments in an informal interview this week, does not deny the significance of the SHAttered attack, given that “it was the first such attack that was not theoretical in nature.” Despite the emergence of less brittle alternatives—spurred on by studies in 2005 that first suggested that SHA-1 could be broken—the algorithm continues to be used in a number of prominent settings. One such setting is the popular Git version control system. If the algorithm can now be broken in practice—thanks to the Google/CWI researchers’ ability to alter a PDF document without affecting its SHA-1 number, thus making it possible to create collisions—it does increase the threat to these holdovers. Pointing out that, “it took us 12 years to go from a warning to be able to hash two files and get the exact same value,” and with further research “attacks are only going to grow more effective,” Torres-Arias is quick to add that “there is no reason to continue using SHA-1 for newer applications.”
-
-So, why then, is Torres-Arias suggesting that the SHAttered attack is not something to lose sleep over, or indeed, perhaps not even, as [other writers](http://searchsecurity.techtarget.com/news/450413667/SHA-1-deprecation-more-important-after-hash-officially-broken) have suggested, “the last nail in the coffin” for the use of SHA-1 in existing systems like Git? The researcher, who has spent a lot of time studying potential weaknesses in Git, and even documented one such flaw (unrelated to its use of SHA-1) in [a presentation](https://ssl.engineering.nyu.edu/papers/torres_toto_usenixsec-2016.pdf) at the Usenix Security Conference last year, points to four mitigating factors to consider before hitting the proverbial panic button.
-
-**Point 1: Since SHA-1 was already known to be “broken,” its use is already in decline.**
-Though there are a few systems that continue to use SHA-1, Torres-Arias emphasizes that NIST itself deprecated the system six years ago, so, in a sense, the SHAttered findings come as no surprise. In the case of Git, he explained, the decision was made back in 2005 that, because of backwards compatibility, SHA-1 would not be replaced because it would be too costly to do so. That doesn’t mean these systems are completely ignoring the warnings. As Torres-Arias points out, “there are already works in the making to both harden Git’s use of SHA-1 and replace the hashing algorithm,” adding to any would-be hackers, “don’t get your hopes up.”
-
-**Point 2: The SHAttered attack requires a particular confluence of factors to fall into place in order to work.** A whole sequence of things would need to occur for this attack to be successful, starting with the resources to stage the collision. Google’s own announcement points out that, even though their attack was more than 100,000 times faster than a brute force attack would be, it still took 9 quintillion SHA-1 computations to complete.
Torres-Arias equates the financial costs of such an effort to “two years of your life and four semesters’ tuition at NYU.” And, afterwards, “you still would need to trick the system into accepting the malicious file, and create a random file that won’t break,” which, according to Torres-Arias, is pretty difficult in its own right. The attack uses a PDF file because, “it will tell the parser to ignore the random junk holding the collision blocks,” while a Word document or other file types would not.
-
-**Point 3: The collision is only the beginning.** “Even if you do get as far as the collision, you still need to break into the server, and change the benign file for a malicious one,” Torres-Arias says, adding “then you have to hope that people don’t notice the switch, and that nobody uploads a newer version that could supersede it.”
-
-**Point 4: There are other attacks that can do the same damage with much less effort.** Torres-Arias ticked off a number of attacks that are simpler to do, and much more effective, including submitting a patch that has a back door, opening a man-in-the-middle connection, or breaking into a version control system, none of which requires the same combination of computing power, money, and excellent timing to put all the pieces in place.
-
-In short, while the SHAttered attack is worthy of attention, this particular threat is better filed under “more-interesting-than-scary.”
-
-You can read Torres-Arias’ full analysis of the SHAttered attack on his own [blog site](https://sangy.xyz/blog/).
diff --git a/_posts/2017-04-04-uptane-blog-post.md b/_posts/2017-04-04-uptane-blog-post.md deleted file mode 100644 index ec7eadbc..00000000 --- a/_posts/2017-04-04-uptane-blog-post.md +++ /dev/null @@ -1,41 +0,0 @@
----
-layout: article
-title: Driving Forward. How a Big Idea Begins its Journey Towards Marketplace Acceptance
-subnav: blog
-comments: true
-tagline: "So, you’ve come up with a really cool idea, discussed it with some knowledgeable people in order to frame that idea for a specific market, and written a program that is now attracting attention..."
-author: 'Lois Anne DeLong'
-categories:
- - 'Uptane'
-
----
-
-
-So, you’ve come up with a really cool idea, discussed it with some knowledgeable people in order to frame that idea for a specific market, and written a program that is now attracting attention both within the designated industry and among the public at large. You’ve demonstrated and discussed the project with a broader representation from the target audience and used their feedback to make sure your proposed solution will work within the unique parameters of that industry. Finally, you have thoroughly documented the design of the project, along with detailed guidance for implementation and deployment. And, you’ve managed to do all of this in just about a year’s time.
-
-The above offers a rough description of the evolution of a project called Uptane, a secure system that borrows state-of-the-art technology from software repositories and applies it to reducing the risk of updating software on automobiles. In January of this year, a research team from the Secure Systems Laboratory, along with their colleagues from the University of Michigan Transportation Research Institute (UMTRI) and the Southwest Research Institute, announced the arrival of Uptane, and it did not go unnoticed.
The open source software project gained a fair amount of media attention since its launch, including placements in such high-profile outlets as Forbes, Christian Science Monitor, Reuters, and National Public Radio.
-
-Now, three months after all the hoopla, the Uptane research team is focused on moving the technology from a cool idea on paper to a strategy that can be widely accepted in an industry that can be cautious about change. This hesitation is completely understandable. A car is an expensive proposition, and the cost does not stop at the cost of manufacture, nor at the consumer price tag. Because cars and trucks are used by millions of people in almost every corner of the world on a daily basis, the industry is very aware of the lives that could be lost if a malicious program was to negatively alter the operation of one or more of its moving parts.
-
-So, what will it take to take Uptane into the mainstream? We had a conversation with a few members of the research team, and based on their comments, here are a few anticipated next steps to move the project forward. We’re posting this blog now in recognition that the status of this project has changed, in SSL’s view, from one that is “sprouting” to one that has had some adoption.
-
-**Step 1: Open the code to the community**
-The day Uptane was formally introduced in New York City, its developers also issued a challenge to the security community: take our code and do your best to break it. While this might seem the exact opposite of what you would anticipate at a product launch, it does reflect a commitment to transparency that has been a hallmark of Uptane from the start.
-
-“There are no secrets here,” observes Trishank Karthik Kuppusamy, a Ph.D. candidate and the SSL team member who led the specification for Uptane. “That has been the plan from day one,” he asserts, adding that every stage of the program’s development has been open to the scrutiny of workshop participants, or the industry representatives who participated in an online discussion forum. Releasing the code for further examination through its [web site](https://github.com/uptane/uptane), and encouraging any and all interested parties to put it to the test, was merely the next logical step.
-
-Beyond just continuing the pattern of transparency, Kuppusamy also sees three distinct benefits to be reaped from this call to white hat hackers. “First,” he says, “it assures the auto industry that, in developing our design, we are not favoring one particular manufacturer or supplier. Second, we believe that all scientific knowledge is actually improved by criticism. And, lastly, by offering our code now, it gives the community a chance to break the design, rather than just one implementation of that design. This allows for feedback on a much deeper level, and can help us identify flaws at the design level.”
-
-While there have been some questions and suggestions to date, no serious design flaws have been found. Submissions are still being welcomed by members of the white hat hacking community.
-
-**Step 2: Promote use of Uptane’s inherent flexibility as a path to adoption**
-Uptane was designed to meet the unique needs of the auto industry, one of which is being able to customize vehicle designs to meet specific end uses. How manufacturers and suppliers work with that flexibility, and whether the current design resources provided by the Uptane team will be sufficient for the demands of customization, will probably be major influences on how the project proceeds.
- -So far, two suppliers for the auto industry have adopted and adapted Uptane technology for use in commercial applications. One of these suppliers, Kuppusamy observes, “came to us through the forum,” and, “without a lot of direct communication with us” was able to integrate Uptane into its own technology. “I would say this shows we are at the point where people can build their own version of Uptane,” he asserts. - -Though “build your own Uptane” is a legitimate way to grow support for the technology, Dr. Andre Weimerskirch, vice president in charge of cyber security for e-systems at Lear Corporation, thinks acceptance of the technology lies in the hands of the major manufacturers. Moving forward, he observes, will require “having at least one car maker behind us...one major manufacturer to say ‘yes.’” Dr. Weimerskirch is in a unique position to make such a judgement, since he served as Uptane’s principal investigator at UMTRI during the early days of the project and now participates from the industrial side. As such, he also points to another key step to acceptance, creating “a mindset of everyone using a well understood and superior security framework” to software update security on vehicles. - -**Step 3: Position the technology towards standardization** -This idea of a shared mindset could ultimately lead to standardization, and Dr. Weimerskirch acknowledges that the Uptane team would ultimately like to see this occur. However, he cautions that approaching a standards organization, such as SAE, IEEE, or ISO, should not be done prematurely. First, he suggests, the research group needs to understand the proper scope, community, and objective to approach the right organization. Second, he believes that what is presented to the organization must be stable and unlikely to change. “Right now, he notes, Uptane is working on such a specification and, once it is at a stable stage, so we are basically handing them a finished product,” then the time will be right. But, even then, he notes that, “it would be good to include someone who has a history of specifying standards, and is known for being reliable and reasonable.” - -Ultimately, Uptane will continue to move forward because the need for such a secure software update system will only grow. “We are in a race to see if we can secure the software update process for cars before hackers break in,” says Justin Cappos, principal investigator for Uptane at NYU’s Tandon School of Engineering. “We’re not resting on our laurels with this project, because we can’t.” diff --git a/_posts/2017-04-10-netcheck-blog-post.md b/_posts/2017-04-10-netcheck-blog-post.md deleted file mode 100644 index 2806bd97..00000000 --- a/_posts/2017-04-10-netcheck-blog-post.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -layout: article -title: Where the Rubber Meets the Road. Lessons learned from NetCheck -subnav: blog -comments: true -tagline: "In 2014, Eleni Gessiou, Yanyan Zhuang, -Justin Cappos, and four of their students introduced a new diagnostic tool -called..." -author: 'Justin Cappos' -categories: - - 'NetCheck' - ---- -In 2014, Eleni Gessiou, Yanyan Zhuang, -Justin Cappos, and four of their students -introduced a new diagnostic tool called -[NetCheck](https://netcheck.poly.edu/projects/project) designed to detect the -causes of failure in networked applications. What set this tool apart from other -fault diagnostic tools is that it could pinpoint the cause of failure, even -if little was known about the network or the application itself. 
The system
-functioned “by simulating a set of system call (syscall) invocation traces
-collected at runtime using standard blackbox tracing tools, and running them
-against a network model,” Zhuang explained.
-
-Initial findings about the tool were quite positive. In a
-[paper](http://www.cs.ubc.ca/~yyzh/nsdi14netcheck.pdf) presented that
-year at the Usenix Networked Systems Design and Implementation (NSDI)
-conference, the research team noted that,
-when run on traces that reproduce faults identified in 30 popular open-source
-projects, NetCheck “correctly diagnosed over 95%” of the bugs. The paper
-also reports that NetCheck was “able to diagnose problems in such popular
-applications as Skype and VirtualBox.”
-
-Following its introduction, a few practitioners tried out the tool in a number
-of different contexts. One major financial company employed NetCheck to
-debug issues in their setup, but released little information publicly about
-their experiences. However, the [Seattle](https://seattle.poly.edu/html) project used NetCheck to
-help diagnose network issues in its own deployment. Through this experience,
-some of NetCheck’s limitations and unstated assumptions became apparent.
-Cappos recently elaborated on these limitations, and explains why the biggest
-challenges in implementing the tool are ones that the lab alone can not resolve.
-
-1. **[Big hurdle] Acquiring traces is non-trivial for many deployment
-environments:** Collecting system call traces has a few challenges of its own.
-“First, for many deployment environments, it requires a fair amount of
-know-how,” Cappos explains. “A cloud provider would need to trace the specific
-thread that serves a user’s connections. Due to load balancers and/or failures,
-the user’s requests may be handled by different machines at different times,
-which may make this tracing difficult, to put it mildly.” Furthermore, since
-tracing is done on a per-thread or per-process basis, many traces on a server
-will incidentally capture requests and other information from other users,
-thus creating some significant privacy issues.
- * *Potential remediation.* This fundamental problem can not be remediated by SSL
-and thus prevents NetCheck from working for distributed applications in the
-cloud. However, within these limitations, server administrators certainly
-could use it to debug things within their own environments. If the user
-sends the server administrator the network trace, that would also work well.
-NetCheck is also useful for peer-to-peer software.
-
-2. **[Big hurdle] On many operating systems, system call tracing tools yield
-incomplete or less than accurate results:** One consequence of this is that traces
-from some OSs (such as Mac and Windows) often lack all of the elements NetCheck
-requires for analysis. For example, dtrace (on Mac) will not capture the
-SO_REUSEADDR flag’s setting. Instead, it captures the memory address where
-this flag is stored. As a result, many important parts of the trace would be
-omitted.
- * *Potential remediation.* To address this issue, “OS vendors would need to
-build more accurate tools,” Cappos asserts. “This is a fundamental issue that
-is difficult for us to surmount since, in some cases, it would require kernel
-access to non-open source OSes.”
-
-3. **[Medium hurdle] Traces are not uniform across operating systems:** Further
-complicating things, traces can differ substantially on different OSes.
-Experience has shown that system call tracing tools may record different -information even when executing ‘POSIX’-compliant programs. “As it stands -now we would need to build one tracing tool per system to reliably access -all the trace elements required,” Cappos says. “Since the idea behind the -development of the tool was for it to be useful across multiple operating -systems, substantially more effort would be needed to make and test the tool -in different environments.” - * *Potential remediation.* A former NYU student, -[Savvas Savvides](https://www.cs.purdue.edu/homes/ssavvide/), built a -[parser](https://github.com/ssavvides/posix-omni-parser) that is meant to -abstract away OS differences in traces. This parser is currently being used in -our on-going work with CrashSimulator, but, -it would need to be much more complete to address this issue effectively. -As an extreme example, working with -Windows (via the WindowsAPI) would require re-engineering almost the entire -set of system call interactions in NetCheck. - -4. **[Medium hurdle] The classification of errors was too broad to be -practical:** “NetCheck works by looking for error patterns that can be classified -by the presence of bugs or evidence of specific types of behavior. The initial -study documented in the paper looked at 70 traces, and at the time we -published, we thought we had done a good job,” Cappos states. But, often only -one or two of those traces focused on a specific situation of significance, -such as NAT traversal or using a VPN. As such, when using NetCheck in other -scenarios, “we found we had overfit for our initial test set. Thus, the high -level output we received from the classifier was not as useful as we had -originally expected.” -* *Potential remediation.* More data would be very useful in improving the -classification. With sufficient practical effort, we feel it is likely we -could do a better job of categorization. However, as it now stands, -NetCheck’s results are “too broad or too vague, and would require too much work -from a much larger dataset than could be reasonably obtained,” Cappos affirms. - -5. **[Small hurdle] In practice, collection would have many incomplete traces:** -System call tracing for large running applications would be unlikely to start -when the server begins running. The reason is that servers often run over very -long periods of time to handle user requests. While it is possible to begin -tracing a running application, NetCheck would need to be -modified to do so. Uncertainty about what issues might have occurred -before the start of the trace could cause additional errors in diagnosis. -* *Potential remediation.* “This seems more like an implementation detail at -first glance,” Cappos notes, “but without accounting for it, it isn’t clear if -there may be research problems lurking here. If NetCheck were more widely -used, we would explore this area further.” - -6. **[Small hurdle] Co-locating traces is a substantial challenge in some -environments:** Even if you can get different parties to agree to acquire and -share traces, locating these (potentially large) files on the same system to -run the analysis is time consuming. Less effective tools, such as ping and -traceroute, require much less effort. As such, NetCheck is mostly useful -for specialized debugging by moderately skilled users. -* *Potential remediation.* N/A. 
- -Though he still calls it “an appropriate idea for a research paper,” Cappos -recently decided to stop trying to transition NetCheck into practical use, -and retired it from the SSL’s active project roster owing to the many problems -cited above. Yet, even as NetCheck recedes to the archives, Zhuang makes -the case for its importance in opening an important new avenue for research. -By pointing out that “people make assumptions when they code,” Zhuang contends -that there is a direct connection from the ideas in NetCheck to two current lab -initiatives: Atoms of Confusion and -API Blindspots. Both projects deal with -understanding how human perceptions and actions in writing and understanding -code can influence code quality. In this indirect manner, NetCheck’s potential - may finally come to fruition. diff --git a/_posts/2017-04-17-Expo-blog-post.md b/_posts/2017-04-17-Expo-blog-post.md deleted file mode 100644 index 5b190b56..00000000 --- a/_posts/2017-04-17-Expo-blog-post.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -layout: article -title: Coming Attractions. PPH and CrashSimulator at the Expo, April 21 -subnav: blog -comments: true -tagline: "The Research Expo at the NYU Tandon School of Engineering is a -showcase where students and faculty alike can share their current research -activities with the greater academic..." -author: 'Lois Anne DeLong' -categories: - - 'PolyPasswordHasher and ' - - 'CrashSimulator' ---- - -The Research Expo at the NYU Tandon School of Engineering is a showcase where -students and faculty alike can share their current research activities with -the greater academic community, and with the general public. The event is -held outdoors and is designed to be interactive and enjoyable for people -from all levels of scientific and technological expertise. - -This year the Expo will be held on April 21 from 1 to 4:00 p.m. -Among the 60-plus exhibits—featuring work drawn from every academic department -at the school—will be presentations on two sprouting projects from the -Secure Systems Lab. Undergraduate researcher Shuyuan Luo -and Ph.D. candidate -Santiago Torres-Arias will present their -ongoing work on [PolyPasswordHasher](https://polypasswordhasher.github.io/PolyPasswordHasher/) -(PPH), a secure password storage system. In addition to a basic introduction -to PPH, they will be spotlighting their work on -[PAM](https://github.com/LolalyLuo/PolyPasswordHasher/tree/PPHPAMModule)(Pluggable -Authentication Module). Developed last summer, and already widely used -in both Linux and OS X systems, PAM offers operating systems a -simple way to “plug in” to the enhanced security benefits of PPH, -without requiring any extensive modifications. - -The other SSL representative presenting at the Expo will be second year Ph.D. -student Preston Moore, who will be sharing -his work on CrashSimulator. -This tool replicates “real-world” testing for new and upgraded software -without the complications of “real-world” deployment. - -The Expo is free, but attendees are asked to [register](https://www.eventbrite.com/e/2017-nyu-tandon-school-of-engineering-research-expo-general-public-viewing-registration-33122091066). -A full list of exhibitors is available at the [event web page](http://engineering.nyu.edu/events/2017/04/21/2017-nyu-tandon-school-engineering-research-expo). 
- - - diff --git a/_posts/2017-04-24-DockerCon.md b/_posts/2017-04-24-DockerCon.md deleted file mode 100644 index f4216afb..00000000 --- a/_posts/2017-04-24-DockerCon.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -layout: article -title: "Notes from DockerCon 2017" -subnav: blog -comments: true -tagline: 'This week several members of the lab went to DockerCon 2017 to learn about some of -the exciting new things happening in the Docker eco-system. We also gave a -talk on TUF...' -author: 'Justin Cappos' -categories: - - 'TUF, ' - - 'in-toto, and ' - - 'lind' ---- - - -This week, several members of the lab went to DockerCon 2017 to learn about some of -the exciting new things happening in the Docker eco-system. We also gave a -talk on TUF, focusing on it's integration into -Notary, one of the key -security systems in Docker. - - -There are many amazing things happening in the Docker ecosystem. -However, three main things stuck with me, and I'd like to highlight them here. - -**Securely distributing secrets.** A major problem that arises in cloud -systems is how to insert secret information into a new VM or container -instance when you start it. For example, how do you get your configuration -file with the database's password into your cloud system? If you put it in -your image, as some developers accidentally do, attackers can read out -these secrets. - -Docker's security team has been extending Notary's implementation of TUF to -ensure the secure distribution of secrets. Their technique securely maps the -secret into the file system of the worker container. Even though the secret -is mapped as a file, it is only stored in memory and is never -written to disk. The Docker team has added a lot of clever -design aspects to make this possible. I'm sure I'll be -talking in more detail about this new technique in a future post. - -**Moby makes it easier to customize Docker.** One of the highlights from -the keynote was that Docker will be making it much easier to replace bits -and pieces of their infrastructure. That is, the company has made it much easier -to pull out bits of -Docker so they can be repurposed. This change also makes it easier to build -a custom component and insert it into Docker. This is particularly -interesting for us because of our interest in integrating technologies -like Lind into container systems. - -**Software supply chain.** One of the key things our in-toto project has been focused on over -the past few years is securing software projects early in the development process. -A number of companies represented at DockerCon are clearly starting to think in -this direction as well. Evidence of this new mindset could be seen in the amount of -activity dealing with scanning -containers to ensure that they do not contain libraries with known -vulnerabilities, as well as assertions that the CI/CD process was run on the -software. - -This certainly indicates to us that we are focusing on a -pressing and important problem. However, we can provide much more -holistic security, starting with the VCS or even the editor. We also -support much of the supply chain verification performed on software that goes into -containers. We've had some discussions with people on Docker's security -team and with the security groups of a few other major projects. The -response to the added benefits and additional rigor of in-toto has been -very positive. Now we will start to work with them toward full -integration of in-toto in these products. 
- diff --git a/_posts/2017-05-01-SensHack.md b/_posts/2017-05-01-SensHack.md deleted file mode 100644 index fe169194..00000000 --- a/_posts/2017-05-01-SensHack.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -layout: article -title: "All in One Day’s Work: Fourth Sensibility Testbed Hackathon" -subnav: blog -comments: true -tagline: 'For four years, the Sensibility Testbed project has thrown down a gauntlet -to students——in less than one day, use the platform to design and test a -new sensor...' -author: 'Yanyan Zhuang' -categories: - - 'Sensibility' - ---- - -For four years, the Sensibility Testbed project has thrown down a gauntlet -to students——in less than one day, use the platform to design and test a new sensor -application that can work on smartphones and tablets. We hosted our latest -Sensibility Hackathon, on March 14, 2017 at Rowan University in Glassboro, NJ, -in conjunction with the IEEE Sensors Applications Symposium -[(SAS)](http://2017.sensorapps.org/). Five student -teams participated in the event, with Claudio Crema -from the University of Brescia, and Majed Alowaidi from the University of Ottawa -receiving the top prizes of new Android phones. Crema and Alowaidi were -recognized for an app that scans nearby WiFi networks to identify -the access router with the best signal quality, and then suggests that router to -the user. - -The competition started with a one-hour tutorial on how to use Sensibility -Testbed, and then participants had a little more than four hours -(subtracting time for a lunch break) to design -their app before introducing it in a five minute “elevator pitch” to other -workshop attendees. Apps were demonstrated at the Awards ceremony during the -conference banquet, and judged for their “impact on society, and the completeness -of implementation.” In addition to the phones awarded to the winners, the top -three teams were also given certificates. - -The Hackathon has become a mainstay of the Sensor Application Development -Workshop, held in conjunction with the IEEE Sensors Applications Symposium (SAS). -Part of the challenge of this competition is that most participants have little -or no knowledge of Sensibility Testbed. The fact that the teams are able to work -effectively with the program with little training is a testament to our claims -that Sensibility is easy to use. - -This year’s participants benefitted from a simplified installation and set-up -procedure for the [Sensibility app](https://github.com/SensibilityTestbed/instructions). -It now takes just a few simple steps to get the program running. -The only challenge we observed was that many of our Windows laptop users had to -install and quickly learn how to use Python,the language in which Sensibility -code is written. However, this did not appear to be a significant stumbling block. - -Feedback on the Hackathon was very positive, with a number of participants -expressing that the program was well prepared,and that most attendees enjoyed -the demos at the Awards ceremony. Many thanks to Sensibility team members -Justin Cappos,Lukas Pühringer, and Albert Rafetseder who helped to run the event. - -We look forward to the fifth year of the Hackathon, which will be held in -Seoul, South Korea in March 2018. 
- - - diff --git a/_posts/2017-05-08-uptane-demo.md b/_posts/2017-05-08-uptane-demo.md deleted file mode 100644 index 1ebf928d..00000000 --- a/_posts/2017-05-08-uptane-demo.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -layout: article -title: "Demonstrated defense: Uptane takes a test drive" -subnav: blog -comments: true -tagline: 'While the Uptane group continues to invite white hat hackers - to “break our system” before malicious parties attempt to do so for real, -several of the developers..' -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- - -While the Uptane group continues to [invite white hat hackers](http://engineering.nyu.edu/press-releases/2017/01/18/call-issued-white-hat-hackers-find-flaws-new-automotive-software-updater) to “break our system” before malicious parties attempt to do so for real, -several of the developers behind the project decided to put on their black hats -and test the defenses themselves. To let potential users see how Uptane—a -secure update framework for automotive computing units—works, -Sebastien Awwad -and Vladimir Diaz, developers with NYU’s Secure -Systems Laboratory, have prepared and posted a demonstration on YouTube. The 13 minute presentation, -which can be [accessed here](https://www.youtube.com/watch?v=Iz1l7IK_y2c&feature=youtu.be) -shows how Uptane provides resilience against six different, and increasingly -malicious simulated attacks—a small sampling of the many threats against which -Uptane can defend. - -Awwad provides a brief overview of the system, then conducts and narrates a -normal update. This is followed by the attacks, which begin with a “very basic” -man-in-the-middle attack on the Director Repository, a live repository of -instructions for each vehicle. Attacks are also conducted against the system's -Image Repository, a more static repository of available updates, as well as on -both simultaneously. The attacks escalate from there, until they conclude with -a demonstration of how OEMs or suppliers can bring repositories back after a -major key compromise, and prevent harm stemming from the use of compromised -keys by malicious actors. - -Within the demo, two Raspberry Pis are used as stand-ins for primary and -secondary clients in an automobile, Electronic Control Units (ECUs) -such as the infotainment ECU and a transmission control unit. A monitor -displays the web-based front-end for the central services that assign updates -to vehicles. - -The Uptane project provides a mechanism to securely distribute software updates -to cars, thus avoiding the comprehensive array of security attacks that can -attack critical systems. For additional information on Uptane, -[go to its web site](https://uptane.github.io/). diff --git a/_posts/2017-05-15-conex.md b/_posts/2017-05-15-conex.md deleted file mode 100644 index 375b7c48..00000000 --- a/_posts/2017-05-15-conex.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -layout: article -title: "TUFening UP Conex" -subnav: blog -comments: true -tagline: 'I spent this week with Hannes Mehnert figuring out how best to secure -Conex, a TUF-like system for the OCaml community. We spent quite a bit of time -pouring over the Conex proposal...' -author: 'Justin Cappos' -categories: - - 'TUF' - ---- - -I spent this week with Hannes Mehnert figuring out how best to secure -[Conex](https://hannes.nqsb.io/Posts/Conex), a TUF-like system for the -[OCaml](https://ocaml.org/) community. 
After quite a bit of time spent poring
-over the Conex proposal, we eventually concluded the best way to secure this
-system was to leverage the delegation design from [TUF](projects/#tuf),
-but with added functionality that can support more complete decentralization.
-Delegations in TUF let a party that is trusted to perform an action grant that
-ability to another party.
-
-
-A lot of interesting observations emerged from our discussions, of which
-I would like to share three.
-
-First, the TUF documentation really needs to be improved, especially when it
-comes to roles. I believe Conex's first design may have used TUF if
-we had provided clearer descriptions of how to set up these roles in a
-realistic repository. Providing a more complete description about how
-different real world TUF repositories are set up will
-facilitate adoption. This will be a point of emphasis in the coming weeks.
-
-Second, there are quite a few `underemphasized' design choices in TUF that
-are important for security. For example, we do not discuss clearly enough why
-it is important to have a repository retrieve the timestamp metadata before
-conveying any other information about the state of the client. This is because
-the repository is forced to say what set of snapshot metadata (and thus
-all targets metadata) they will subsequently serve to the client. This
-prevents an attacker from learning the state of the repository, and thus being
-able to customize a replay attack to the specific client's metadata state.
-
-### Rotating keys
-
-The final observation was about how to do key rotation. The model
-for Conex / OCaml is to distribute trust as much as possible. Hence, it
-is desirable to delegate to a threshold of developer keys rather
-than to a single, centrally-stored project key that could be stolen.
-Hannes and I discussed an interesting way to let a role rotate
-its keys to another role without requiring any change in delegation for
-that receiving role. Effectively, a key can sign a statement saying
-``This key should not be trusted anymore; instead, use this new key.''
-For example, foo's delegation to bar need not change for bar to
-rotate their key.
-
-### Handling cycles of rotation
-
-One interesting side effect of this new mechanism is how to handle a ``rotation
-loop,'' where a series of rotations creates a cycle. In this case, we believe
-the sane behavior is for all of those keys to be treated as revoked, and
-for none to be trusted. So, if bar rotates their keys back to a previous key,
-then all keys in the cycle, or that rotate to the cycle, are treated as invalid.
-
-### Explicitly revoking one's own key
-
-The nice feature of this is that a role now has an easy way to revoke trust
-in its own key. The role simply rotates the key to itself (creating a cycle
-of one). This effectively states, "Do not trust my key for any signatures in
-the future." It is an option that can be used to let a user revoke trust in its
-key without requiring the parties that delegate to it to be involved.
-
-### Rotation beyond targets roles?
-This rotation primitive is also being considered for non-targets
-roles. About six months ago, Docker's security team expressed strong interest
-in key rotation for the timestamp role. We are eager to work with them to ensure
-this proposal meets their needs / goals. This primitive would likely replace
-the root rotation mechanism, preventing users from downloading all missing root
-metadata (whether the root keys changed or not).
It also would allow the
-root roles on a repository to sign a statement revoking trust in themselves,
-in case a threshold of root keys were compromised. This would mean that
-any client that can connect to retrieve this metadata would stop trusting
-that repository in the future.
-
-In closing, we are excited to continue this discussion via
-[TAP 8](https://github.com/theupdateframework/taps/blob/tap8/tap8.md). We
-are working together to move forward the deployment of TUF's security for the
-OCaml community!
-
diff --git a/_posts/2017-05-22-gsoc.md b/_posts/2017-05-22-gsoc.md deleted file mode 100644 index d739669d..00000000 --- a/_posts/2017-05-22-gsoc.md +++ /dev/null @@ -1,31 +0,0 @@
----
-layout: article
-title: "Google Summer of Code -- Python Dependency Resolution"
-subnav: blog
-comments: true
-tagline: 'This summer we are giving back to the Python community. I am
-excited to be working with Donald Stufft to mentor Pradyun Gedam, while he
-works on dependency resolution for pip...'
-author: 'Justin Cappos'
-categories:
- - 'TUF'
-
----
-
-This summer we are giving back to the Python community. I am excited to
-be working with Donald Stufft to mentor Pradyun Gedam, while he works on
-[dependency resolution for
-pip](https://gist.github.com/pradyunsg/5cf4a35b81f08b6432f280aba6f511eb).
-This should help Python users install packages in situations where there are
-conflicts between dependencies of different packages.
-
-Pradyun will be doing this research work under the aegis of the Google Summer
-of Code program. GSoC, as it is more commonly known, gives college student
-programmers the opportunity to gain practical experience in open source
-development, while receiving a stipend for their efforts. Pradyun will be
-one of 1,318 GSoC students who will spend the summer working at one of
-201 mentoring organizations around the world.
-
-Let the fun begin!
-
-Justin
diff --git a/_posts/2017-05-29-SeattlED.md b/_posts/2017-05-29-SeattlED.md deleted file mode 100644 index 52ca61c6..00000000 --- a/_posts/2017-05-29-SeattlED.md +++ /dev/null @@ -1,124 +0,0 @@
----
-layout: article
-title: "Rubber Meets the Road: Classroom Lessons Learned from Seattle"
-subnav: blog
-comments: true
-tagline: 'When Seattle first debuted in 2009, it brought to life several
-exciting ideas. For starters, it demonstrated the potential of cloud technology
-that could securely run on donated devices. It also gave educators..'
-author: 'Lois Anne DeLong'
-categories:
- - 'Seattle'
-
----
-
-When [Seattle](https://seattle.poly.edu/html/) first debuted in 2009, it brought
-to life several exciting ideas.
-For starters, it demonstrated the potential of cloud technology that could
- securely run on donated devices.
-It also gave educators a powerful new way to show novice student programmers how
-networks actually function.
-In less than a decade, it has spawned a number of new applications, and has
-taught thousands of students in 100 classrooms around the world concepts in
-cloud computing, networking, distributed systems, and parallel programming.
-Increasingly, Seattle has also provided an effective method for teaching
-the basics of systems security.
-
-As its name may indicate, Seattle was initially developed at the University of
-Washington, the brainchild of several researchers, including SSL director Justin
-Cappos. The program offered researchers a way to “run code on any device and
-safely share computer resources,” thus turning “any device into a cloud
-provider,” Cappos explains.
Developed at a time when cloud computing was just -coming to prominence, Seattle offered an easy-to-use platform -for tapping the potential of this technology. - -Recently, Cappos, and research professor Albert Rafetseder, an early adopter of -the technology, reflected on the evolution of this project into an effective -teaching tool. Cappos also shares some lessons learned along the way about the -best ways to introduce new teaching technologies. - -According to Cappos, Seattle’s initial set of adopters were intended to be -educators. Early on in its development, two decisions were made that shaped -the overall nature of the project. First, these testbeds were designed to be -easy to use, so developers with little experience could get up to speed quickly. -And, secondly, Seattle was to be a free “community project,” with interested -individuals and universities donating computational power, storage, and making -improvements to the Seattle software. In addition, it was designed agnostic of -any particular language or operating system, so it could be run over a wide -number of platforms. - -Support for the new project was generated in a somewhat unusual way. “In the -initial phase of the project, I gave 100 talks at schools around the country and -ran demos of Seattle for faculty and students. We built an audience for the -product that way.” As the talks drew in supporters, users, and a growing number -of donated devices, the research team decided to write a few “prepackaged” -assignments for use in networking classes. The inspiration for these assignments - was what was currently being taught. -According to Rafetseder, the Seattle classroom materials “developed naturally” -as a way to “show off how easily the platform enables you to do interesting -experiments.” This approach of "teaching something moderately complex by building -on simple examples,” has also allowed students a great deal of flexibility to use -Seattle for projects that match their interests. - -Exposing students to “real networks,” such as internet sites, and letting them -see “real world latency, firewalls, and other things that they would not -otherwise have hands-on experience with,” has been the primary benefit of -Seattle’s educational offerings. But, both Rafetseder and Cappos point to other -plusses of the software’s use in the classroom. Cappos observes that using the -program prevents students from “accessing and using existing libraries, which -students often use as crutches” when developing a program. Rafetseder cites how -it allows students to learn “how many things go wrong in computer networks all -the time, and how brilliantly smart yet relatively simple the algorithms that -govern the networks are.” -He adds that “it's great to let students experience that network protocol design -is tough,” and “that there is hardly ever an optimum set of parameters, or an -algorithm that is guaranteed to work under all circumstances.” - -When queried as to how Seattle is being used today, Cappos acknowledges that, -as an [open source](https://github.com/SeattleTestbed) project, “we don’t -always know who is using it unless people tell us.” All the necessary code can -be accessed through the Seattle website, -free of charge and, for users that do not use one of the Seattle clearinghouses, -Seattle can be used without the need for any type of registration. The -advantages of such an approach to universities and research communities are -obvious, but Cappos acknowledges that occasionally it has its issues. 
Recalling one of the project’s earliest educational adoptions, he explains, “there were problems with the platform. Unfortunately we didn’t get errors or complaints till much later, when the user told us, ‘this has been broken for a week.’ As a result of that encounter, we added better monitoring, but even now, we don’t exactly know in how many, or in what ways, Seattle is being used.” When feedback is received, however, it is generally favorable.

As Seattle begins to close in on its first decade, Cappos notes there has been something of a shift in which faculty are accessing the program, from networking classes to system security classes. Seattle’s ability to create reference monitors makes it increasingly appealing to faculty looking to offer their students hands-on experience in security topics. Another shift that may grow in the future is away from using Seattle on computers and towards smart devices. Running Seattle, or its spin-off project Sensibility Testbed, on smartphones and tablets “provides a direct way of interacting with the device thanks to quick code turnaround times,” Rafetseder states. “Feedback is immediate. I think this is of great use for newcomers.”

Asked to summarize the lessons learned from Seattle’s educational components, Cappos shared the following observations:

* **Know your audience.** “We never take the attitude that Seattle is a finished product. We have found it valuable to be able to look over someone’s shoulder to see how they implement the projects.”

* **Eat your own cat food.** “We have used and built on the tools we created. In our case, we used Seattle code to implement new tools and products. For example, the code base in Seattle was used in Sensibility Testbed, which adds the ability to safely and securely collect data from sensors on mobile devices.”

* **Make it easy for others to fix your mistakes.** By presenting Seattle as an open source project, “we have made it easier for people to collaborate with us.” Cappos is genuinely proud of the fact that “more than 100 people have worked on Seattle at one point or another through its history. We have viewed our role as ‘quality control.’ We allowed the program to scale, addressing problems as they were called to our attention. In doing so, people are much more invested than if they had simply purchased the product.”

diff --git a/_posts/2017-06-05-Atoms.md b/_posts/2017-06-05-Atoms.md
deleted file mode 100644
index 20a101a8..00000000
--- a/_posts/2017-06-05-Atoms.md
+++ /dev/null
@@ -1,82 +0,0 @@
---
layout: article
title: "Atoms of Confusion: Tracking the Tiny Causes of Programmer Misunderstanding"
subnav: blog
comments: true
tagline: 'Atoms of Confusion is a project designed to understand the root causes of programmers’ misunderstanding of source code. It is anchored in the idea that empirical software engineering research...'
author: 'Dan Gopstein'
categories:
  - 'Atoms of Confusion'
---

[Atoms of Confusion](https://atomsofconfusion.com/) is a project designed to understand the root causes of programmers’ misunderstanding of source code. It is anchored in the idea that empirical software engineering research can be at once rigorous, objective, quantitative, and insightful. To this end, the project has core members and collaborators from diverse fields, including psychology and neuroscience, as well as computer science. It builds on traditions from software engineering research, and adds wisdom from more mature scientific disciplines.
Together, it offers a fresh view of what causes misunderstandings in programming.

Like most psychological phenomena, “confusion” is an intricate and complicated topic. To ensure rigor throughout the project, the initial foundational experiments have chosen an objective, measurable definition of confusion, and have focused on the most minimal causes of such a state in programmers. Confusion, as used in this work, describes what occurs when a programmer believes source code will behave differently than it actually does when run on a computer. Our goal is to find the smallest pieces of source code that can cause this type of misunderstanding in programmers.

The initial work, selected for publication in [ESEC/FSE 2017](http://esec-fse17.uni-paderborn.de/), focuses on the C programming language. C is a natural fit since it is both very popular and very conducive to misunderstanding. C is so prone to confusing programmers that it has even spawned a contest (the IOCCC, or Obfuscated C Code Contest) to create the most deliberately difficult-to-read small C programs. These “obfuscated” programs were inspected to find recurring patterns that made them difficult to understand. In turn, an experiment showed these extracted “atoms” of confusion to 73 programmers in order to empirically measure exactly how confusing they were. Each participant was asked to hand-execute ~50 of the distilled confusing pieces of code, as well as equivalent code snippets that had been transformed to use less confusing constructs. Afterwards, a separate group of programmers was shown the original IOCCC programs, both before and after the atoms of confusion had been transformed away. These studies were designed to test (1) whether very small pieces of code can cause programmers to misinterpret programs and (2) whether these small patterns affect the understanding of larger programs. All evidence so far points towards the potential for small patterns in code to cause large gaps in understanding in programmers.

Atoms of Confusion is among the first research efforts to explicitly investigate patterns of this nature. Similar concepts have traditionally been addressed in style guidelines and coding standards. However, those typically reflect opinions and anecdotal observations. The results of the experiments conducted as part of this project so far indicate that popular style guides for large projects are often incomplete, and occasionally make recommendations that may be harmful to programmer comprehension.

This work is rooted in simple facts that can be scientifically validated. The experiments performed for this project were designed to make as few assumptions as possible. All hypotheses were stated in objective and falsifiable terms. On this solid foundation, there is a plan to grow the body of research to test more sophisticated theories. By using the data already collected, and by growing the methodology, there are groups of questions which can be addressed in new ways. In the future, this work will seek answers to questions that can allow for automatic analysis of code bases for comprehensibility hot spots. It can also seek answers to such questions as:

* Why programmers make the mistakes they do.
* How these code patterns generalize over different programming languages.
* How the natural languages and cultures of programmers may affect the way they react to these patterns.

The project offers a way to deepen our understanding of how languages do or do not mesh in the minds of developers. Insights of this nature could not only lead to fewer coding misunderstandings in global computing projects, but help to evolve much more “human-friendly” programming languages and methodologies.

With time, Atoms of Confusion can help inform the age-old question of why and how programmers write bugs. In the meantime, we welcome other researchers to replicate our work or explore new questions using our data sets. All of our raw study materials and data are available on our [web site](https://atomsofconfusion.com/).

diff --git a/_posts/2017-06-17-medical-hack.md b/_posts/2017-06-17-medical-hack.md
deleted file mode 100644
index 7bff2c91..00000000
--- a/_posts/2017-06-17-medical-hack.md
+++ /dev/null
@@ -1,54 +0,0 @@
---
layout: article
title: "Medical Device Insecurity --- A Prescription For Disaster"
subnav: blog
comments: true
tagline: 'This past week I spent time in Finland working with medical device experts. I talked to vendors, hospital IT personnel, and security experts, which helped me learn a lot...'
author: 'Justin Cappos'
categories:
  - 'Upccinate'
---

This past week I spent time in Finland working with medical device experts. I talked to vendors, hospital IT personnel, and security experts, which helped me learn a lot more about the deployment environment and general use models of these devices. I want to focus on one of the key parts of the event --- a hack-a-thon.

About 30 hackathon participants had the opportunity to break one of five different medical devices. Of these, my hack-a-thon partner and I spent a substantial amount of time working with one device. We were able to work with others to fashion an exploit that gave us a root shell. We then used this flaw to display a ransomware-style message on the device's screen to threaten the user.

Even more frightening, we found that devices of that type can be discovered and exploited remotely. At least in the configuration we tested, the device attempts to receive instructions from a server over HTTPS. However, there is no certificate checking! So, any man-in-the-middle attacker who can answer for this hostname and/or IP address could impersonate the server and, consequently, compromise all devices of this type.

There were a host of other issues with the device. We found it was riddled with CSRF vulnerabilities, code injection faults, and basic failures in logic, such as neglecting to check the old password on the password change page. Furthermore, after dumping the OS image, we found dozens of pieces of outdated software installed. All combined, the device had more than a thousand known security issues.

While the product we hacked was the worst offender across all the medical devices at the hack-a-thon, the other four had a host of issues as well. Another researcher found that a different device would return the username and password hash even if you provided the wrong password or username. Since the password hash was not salted, you could simply search for the hash on Google to find the password (the company's name!).

Overall, it was a really interesting time and taught me a lot about the mindset of vendors in the space, and the challenges ahead in improving security in this sector.
The vendors I spoke with are trying hard to address these -issues, which is evident by the fact that newer devices are much more secure -than older ones. I hope that we can work with them in the future to help -secure these devices in which even the smallest vulnerability can have a -profound impact on human health! diff --git a/_posts/2017-07-03-NTIA.md b/_posts/2017-07-03-NTIA.md deleted file mode 100644 index dc035eed..00000000 --- a/_posts/2017-07-03-NTIA.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -layout: article -title: "SSL to NTIA: Secure Software Updates on the IoT" -subnav: blog -comments: true -tagline: 'On May 11, President Trump issued an Executive Order to improve -the “cybersecurity of Federal networks and critical infrastructure.” The -document is a call to action to a number of Federal agencies..' -author: 'Lois Anne DeLong' -categories: - - 'TUF' - ---- -On May 11, President Trump issued an Executive Order to improve the -“cybersecurity of Federal networks and critical infrastructure.” The document -is a call to action to a number of Federal agencies, each of which is charged -with developing strategies and policies to better protect key government -networks, systems, and data resources from attack. In seeking out the best -strategies for addressing one identified vulnerability, “Resilience Against -Botnets and Other Automated, Distributed Threats,” the National -Telecommunications and Information Administration (NTIA) issued a “request for -comments” (RFC) on June 8. The RFC invited all stakeholders, “including -private industry, academia, civil society, and other security experts” -to share “ways to improve industry's ability to reduce threats perpetuated -by automated distributed attacks, such as botnets, and what role, if any, -the U.S. Government should play in this area.” - -As a group that has long advocated the role of secure software update strategies -in addressing these types of vulnerabilities, our lab viewed this RFC as a way -to evangelize for these strategies, particularly in the fast-growing area of the -Internet of Things (IoT). A response, primarily crafted by SSL summer intern -Shikhar Sakhuja, was prepared over the -past few weeks to emphasize the need -for all IoT items to be securely updated to fix vulnerabilities—even those -currently deemed by their manufacturers as too inexpensive to warrant such -treatment. In doing so, the response emphasized that, without secure updates -to patch vulnerabilities, malware can be downloaded and signed like a regular -update. In turn, this malware can convert any smart device, even medical -equipment like defibrillators or imaging machines, into functional bots in a -botnet. - -The response was built around three main points, which are listed below along -with a few quotes from the arguments used to support them. - -* **NTIA should establish a Security Standard for Software Updates on IoT devices, -regardless of the manufacturer or brand of the product, as such a uniform -standard would establish a significant barrier for would-be attackers.** -Standardization can strengthen the security of IoT devices by removing the -argument that, since “many IoT devices are inexpensive, and likely to be -replaced after only a few years,” it would be “economically irrational” to -update them. 
“Since software updates can be key in securing IoT devices and -preventing their use as bots in the next DDoS attack, such an investment -must be encouraged.” - -* **NTIA should make compromise-resilience a mandatory component of the IoT update -security standards to protect vulnerable endpoints** Requiring compromise -resilience be built into any framework for updates can ensure that, even -after a successful attack, “the least number of IoT devices are affected and -minimal damage can be inflicted upon the affected devices.” In addition, such a -framework can better deal with new vulnerabilities as they are discovered, as -well as, “fix system crippling bugs and combat malware.” - -* **The NTIA and the Federal government have strong roles to play in establishing -an IOT consortium in which all the stakeholders in the IoT sphere can jointly -develop a compromise-resilient security update framework.** Here, the response -outlines a policy that SSL has pursued from the beginning: a commitment to -open source development, in which nothing is proprietary, and code and knowledge -is shared with the community so we can reach mutually-achieved solutions. -“We want to work with other stakeholders to design a compromise-resilient system -that would allow us to secure the chain of updates that could address security -bugs in IoTs, improve the devices’ performance, and defend the interconnected -web of devices that could redefine everything from manufacturing to medical -practices to the way we heat our water.” - -You can read the entire SSL response to NTIA at [https://docs.google.com/document/d/1ZHc5t8YtIxi3CwWcwfhtZJ_DPPHUu0oScgxvwlO8G-c/edit](https://docs.google.com/document/d/1ZHc5t8YtIxi3CwWcwfhtZJ_DPPHUu0oScgxvwlO8G-c/edit). -[Executive Order 13800](https://www.federalregister.gov/documents/2017/05/16/2017-10004/strengthening-the-cybersecurity-of-federal-networks-and-critical-infrastructure) and the original [RFC](https://www.ntia.doc.gov/federal-register-notice/2017/rfc-promoting-stakeholder-action-against-botnets-and-other-automated-threats) from NTIA are both available via the -Federal Register. diff --git a/_posts/2017-07-27-interns.md b/_posts/2017-07-27-interns.md deleted file mode 100644 index cdeb6e0a..00000000 --- a/_posts/2017-07-27-interns.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -layout: article -title: "Welcome to Brooklyn: Summer is Intern Season" -subnav: blog -comments: true -tagline: 'Over the past few weeks, SSL has welcomed a diverse group of summer -interns to 2 Metrotech. A total of 11 undergraduate, master’s, and high school -students are now conducting hands-on research in advancement of...' -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - - 'Seattle' - - 'Lind' - - 'in-totoAtoms of Confusion and' - - 'CrashSimulator' - ---- -Over the past few weeks, SSL has welcomed a diverse group of summer interns to -2 Metrotech. A total of 11 undergraduate, master’s, and high school students are -now conducting hands-on research in advancement of eight different lab -initiatives. Drawn from three different NYU campuses, and a number of other -academic institutions, the interns are tackling real-world projects ranging -from the design of a compromise-resilient update framework for devices on -the Internet of Things to learning more about kernel paths in order to design -safer virtual machines. - -This post offers a brief introduction to a few of the interns. 
In future posts, -the interns themselves may be sharing some insights on their projects and -lessons learned from their summer in Brooklyn. - -Shikhar Sakhuja and -Cynthia Xin Tong, both rising juniors, -come to us from NYU’s Shanghai and Abu Dhabi campuses, respectively. Shikhar -found himself in the somewhat unique position of serving as lab spokesperson -shortly after his arrival, as he worked with -Dr. Trishank Karthik Kuppusamy in -crafting a [lab response](https://ssl.engineering.nyu.edu/blog/2017-07-03-NTIA) -to a request for comments issued by the National -Telecommunications and Information Administration (NTIA). He will be researching -issues related to the design of secure software update systems for the -Internet of Things. Cynthia, who has experience in doing web development, -was given a brand new design challenge in her work with the -[Seattle](https://seattle.poly.edu/html/) project. -She is investigating current-day DHTs (Distributed Hash Tables) for their use -as key-value stores. As the summer progresses, she will be designing a library -for the RepyV2 sandbox that can interface with the DHT lookup system. - -Current NYU Tandon students Parina Kaewkrajang -and Yu Zhang are no strangers -to the Brooklyn campus, though Parina mentions it is “somewhat strange to be -running into professors all the time” now that she is on an administrative -floor. Parina, who will be entering her junior year, described her research -task on the [Lind](https://ssl.engineering.nyu.edu/projects#lind) project as -“analyzing the kernel to secure it.” Yu, a rising -sophomore, and a native of Shanghai, is working on the [Atoms of Confusion](https://atomsofconfusion.com/) -project and has been helping with a new iteration of an online study. - -Three of the new interns join us from the Indian Institute of Technology. -Shikher Verma is a senior at the Kanpur -campus of this institution, while -Sachit Malik and -Yash Gautam are from the Delhi campus. Shikher -and Sachit are both working on aspects of the [in-toto](https://in-toto.github.io/) -project, while Yash is working -on Lind, examining function calls between shared libraries. - -For Ryan Patton, a Long Island native and -rising senior from Williams College -in Massachusetts, his summer at SSL is a chance for more hands-on exposure to -research. As a predominantly liberal arts school, Ryan describes his CS program -at Williams as “theoretical.” However, he was somewhat surprised when he -initially found himself working on “30 year old kernel code.” His research -on the [CrashSimulator](https://ssl.engineering.nyu.edu/projects#crashsimulator) -project, a sprouting technology at the lab that simulates -real-world conditions for testing new application, is Indicative of how newer -systems are often built on old legacy code. - -Several of the interns have expressed appreciation for the “approachability” -of the professors in the department, and the helpfulness of the graduate -students with whom they are working (Santiago Torres-Arias, YiWen Li, -Preston Moore, Trishank Kuppusamy, -Dan Gopstein), along with research professor -Albert Rafetseder (Seattle, Sensibility) -and staff developer Lukas Pühringer -(Seattle, Sensibility, and in-toto). - -Also joining us this summer are Christopher Lo, -Shiv Lakhanpal, and Brandon Zhu. - -Though it was still early in their stay when interviewed, almost all the interns -had already experienced some surprises. 
Parina was somewhat amazed at how much -time is spent “looking up things you don’t understand on Google or Stack -Overflow,” a point that many of the interns agreed with. -For Shikher, the surprise was the potential immediate impact of his work. As a -physics major at IIT, research meant “working on something that would be applied -150 years from now.” The idea that the work he is doing today could be -implemented in a matter of weeks or months was an appealing realization. -Sachit agreed with this assessment that it was a different experience to be -“working on practical things that could be deployed so quickly.” - -On a somewhat more personal note, Sachit observed he was surprised to find a -piano over at Metrotech. He has been enjoying the chance to play the instrument -in his spare time. While improving piano skills isn’t the goal of the -internship, Prof. Cappos is supportive of -students branching out. -“Having diverse and unique perspectives helps us to tackle a problem in a -different way.” diff --git a/_posts/2017-08-15-CCHackathon.md b/_posts/2017-08-15-CCHackathon.md deleted file mode 100644 index e86ef109..00000000 --- a/_posts/2017-08-15-CCHackathon.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: article -title: "CacheCash Hackathon: It Takes a Village (or a Lab…)" -subnav: blog -comments: true -tagline: 'A hackathon can be likened to the barn raisings of old, in which a community would come together for one day to build a barn or a home for a neighbor. Like those earlier communal activities, a hackathon involves a group of individuals collaborating...' -author: 'Lois Anne DeLong' -categories: - - 'CacheCash' ---- - -A hackathon can be likened to the barn raisings of old, in which a community -would come together for one day to build a barn or a home for a neighbor. Like -those earlier communal activities, a hackathon involves a group of individuals -collaborating on one shared project within a set time period. On Friday, July -28, SSL brought members of its community together—Vladimir Diaz, -Sebastien Awwad, -Lukas Pühringer, -Artiom Baloian, -Santiago Torres Arias, Ghada Almashaqbeh, and -Justin Cappos— to collaborate on a demo for -one of the lab’s sprouting projects, CacheCash. -As described by Almashaqbeh, a lead researcher on the project, the -demo prepared that day offers a “full-fledged prototype of the content -distribution service supported by CacheCash.” - -One of the newer lab initiatives, CacheCash is a cryptocurrency that provides a -decentralized, adaptable, and low-overhead approach to the construction of -dynamic CDNs. By having end users organically set up new caches to serve -content as they collect cryptocurrency payments, CacheCash bypasses the -mediator stage of content delivery network (CDN) companies, and enables caches -to dynamically come and go as demand for them dictates. In addition, CacheCash -addresses the security issues often encountered in monetary-incentivized -distributed systems through a variety of cryptographic and financial techniques, -yet adds only minimal overhead. Thus, it is an innovative way of utilizing -cryptocurrencies to provide a useful service. - -The demo represents an important step forward in implementing and deploying -CacheCash. 
Almashaqbeh observed that the project team has already “implemented -the core functionality that allows content providers to construct dynamic -content delivery networks (CDNs),” so the hackathon demo focused on “showing -potential clients how to retrieve video content.” The new website “displays -informative messages about the operations that happen behind the scenes -while the video is being retrieved,” including the number of contacted caches, -their IP addresses, the number and sizes of data blocks retrieved, and any -delayed responses from the caches. - -Other tasks completed during the Hackathon were: -* Preparing a [manual](https://github.com/Cache-Cash/CacheCash/blob/master/README.md) -for CacheCash that offers a high-level technical description of its design, -and documents all the steps needed to compile code for a reference -implementation of the project. -* Investigating the selection algorithms of other caches and evaluating their -effect on levels of service quality. This is essential because, according to -Almashaqbeh, “the current implementation of CacheCash selects caches in a -random fashion to serve client requests.” The algorithms included both those -that adopt locality and those that adopt bandwidth as the selection criteria. -* Creating a [“dockerized” version](https://github.com/SantiagoTorres/cachecash-dockerized) -of CacheCash that can automate the testing/deployment process. This version is -a repository containing all the necessary components to download and run the -program. A potential demo user can now “simply clone the repository, -run docker-compose and try out CacheCash on her browser,” according to -Torres Arias, who created it. - -As a side task during the Hackathon, unrelated to the demo website, the team -investigated a few cryptocurrency options to support the monetary incentives -in the system. One such option was implementing CacheCash as a token on top of -Ethereum, an open software platform, based on blockchain technology, which -enables developers to build and deploy decentralized applications. The team -looked at the overhead generated by making payments using the “smart contracts” -functionality supported by Ethereum. The results suggest such an option would -greatly increase the cost of CDN service for CacheCash. Hence, the team is now -looking into ways to either optimize the smart contract code, or switch efforts -to a different cryptocurrency system. - -In looking back on the work accomplished, Almashaqbeh reported that the next -step will be to try running the demo “on a larger scale across different -geographic areas.” It should be available to those interested in trying it out -by the end of August. diff --git a/_posts/2017-10-18-UptanePopSci.md b/_posts/2017-10-18-UptanePopSci.md deleted file mode 100644 index 8dc4d79a..00000000 --- a/_posts/2017-10-18-UptanePopSci.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -layout: article -title: "Uptane named one of the Top Security Innovations of 2017 by Popular Science" -subnav: blog -comments: true -tagline: '' -author: 'Justin Cappos' -categories: - - 'Uptane' ---- - - -We are excited to announce that the secure automotive software update -project Uptane was selected by [Popular Science](https://www.popsci.com/top-security-innovations-2017#page-2) as one of the -top security innovations of 2017. 
Uptane was developed in collaboration with the [University of Michigan Transportation Research Institute (UMTRI)](http://www.umtri.umich.edu/) and the [Southwest Research Institute (SwRI)](http://www.swri.org/), and is supported by contracts from the U.S. Department of Homeland Security, Science and Technology Directorate (DHS S&T).

Congratulations to all involved, especially Dr. Kuppusamy (aka Trishank), who recently defended his dissertation on Uptane!

diff --git a/_posts/2017-12-07-TUF-CNCF.md b/_posts/2017-12-07-TUF-CNCF.md
deleted file mode 100644
index 8682fc9b..00000000
--- a/_posts/2017-12-07-TUF-CNCF.md
+++ /dev/null
@@ -1,45 +0,0 @@
---
layout: article
title: "TUF Featured at CloudNativeCon"
subnav: blog
comments: true
tagline: 'A few months ago, TUF was adopted by the Linux Foundation’s [Cloud Native Computing Foundation](https://techcrunch.com/2017/10/24/the-cloud-native-computing-foundation-adds-two-security-projects-to-its-open-source-stable/). The CNCF provides a more...'
author: 'Justin Cappos'
categories:
  - 'TUF'
---

A few months ago, TUF was adopted by the Linux Foundation's [Cloud Native Computing Foundation](https://techcrunch.com/2017/10/24/the-cloud-native-computing-foundation-adds-two-security-projects-to-its-open-source-stable/). The CNCF provides a more formal foundation structure to the project and has encouraged us to participate in positive efforts such as working toward CII badging. There is also the potential for substantially more adoption as the CNCF pushes for more use of TUF by different teams. We are looking forward to working toward this.

We had a meeting today with attendees interested in TUF and Notary to help new contributors get started with the project. Most of the discussion was around a few of the [TAPs](https://github.com/theupdateframework/taps), in particular [Multiple repository support (TAP 4)](https://github.com/theupdateframework/taps/blob/master/tap4.md), support for [setting URLs for roles in the root metadata file (TAP 5)](https://github.com/theupdateframework/taps/blob/master/tap5.md), and [key rotation / self-revocation of keys (TAP 8)](https://github.com/theupdateframework/taps/blob/master/tap8.md). We also had a broader discussion around the use of the custom metadata field for TUF targets, which David Lawrence mentioned a neat use case for. I am really happy about all of the positive effort and attention that the project is getting!

However, there are some other things we did not expect. There was TUF merchandise for sale at the conference, including hoodies and T-shirts. There also was an ice sculpture of the TUF logo on display at an event last night. A very interesting way to promote the project!

Thanks,
Justin

diff --git a/_posts/2018-01-08-WebUSB.md b/_posts/2018-01-08-WebUSB.md
deleted file mode 100644
index 75e0238c..00000000
--- a/_posts/2018-01-08-WebUSB.md
+++ /dev/null
@@ -1,246 +0,0 @@
---
layout: article
title: "Creating a Web-enabled USB Driver with WebUSB"
subnav: blog
comments: true
tagline: 'WebUSB is an emerging technology that opens numerous possibilities for interaction with hardware devices without the need to install any drivers on the user side. This could be very useful for playing web-based games...'
author: 'Santiago Torres-Arias'
categories:
  - 'Informational'
---

WebUSB is an emerging technology that opens numerous possibilities for interaction with hardware devices without the need to install any drivers on the user side.
This could be very useful for playing web-based games (e.g., having a joystick for a game), utilizing web services (e.g., a 3D printer driver) or, as in the case described below, completing two-factor authentication with USB-enabled hardware tokens. Sadly, because it is an emerging technology, I found little documentation on how to write the appropriate code for a WebUSB driver. So, given that there are currently no applications supporting the development of these drivers, I decided to document the process of writing one for a YubiKey, using hash-based one-time passwords (HOTP).

If you are curious as to why I initiated this effort, it grew out of a summer project here at NYU-SSL in which we used PolyPasswordHasher to support two-factor authentication (2FA) by employing a YubiKey to increase the average entropy of the database. This, in turn, protects all the passwords. One of the goals that I had set for the summer was to create a demonstration website for a PPH-protected database. A user could register a YubiKey at this site, and then log in using HOTP for 2FA.

Sadly, the ecosystem for browser USB extensions feels like a wasteland of deprecated or not-so-usable technologies:

* You could write a plugin, but that's incredibly insecure and likely to be deprecated in one or two years. This is a very insecure option because the plugins run native code on the user’s system, sometimes without proper sandboxing.
* You could use Chrome's USB extension library but, guess what, that's also going to be deprecated in time. I personally wanted to avoid anything that would have to be reimplemented in just a year or two.
* You can try to ship a binary with a browser extension, but that would create more cross-platform compatibility problems than I can list here. I also don’t think it is reasonable to ask users to download some binary in order to use a demo website.

This leaves us with a somewhat experimental technology: WebUSB.

## Enter WebUSB

WebUSB is a standardized technology to provide a bridge for websites that can connect to user USB devices using JavaScript. You can look at it as if the website were providing you with a USB driver, along with the JavaScript you usually send to, say, animate the elements on your website.

At first glance, this may sound like a security nightmare. Shipping code that can access a user's hardware could be problematic. However, WebUSB is an improvement, security-wise, compared to the previous alternatives for the following reasons:

* The code is not running outside of a sandbox, like a plugin would be (e.g., Flash).
* Permissions must be granted by the user to allow a website to access a USB device.
* Some devices, like USB keyboards, are not accessible to WebUSB (e.g., to avoid keylogging).

That being said, I would still advise potential users to be wary of new technologies, as they need a thorough auditing before they can be trusted for a security-sensitive deployment. Issues are often found in early deployments of all technologies (take, for example, [this](https://labs.mwrinfosecurity.com/blog/webusb/)).

Besides the less-than-perfect security aspects of WebUSB, the only drawback that I found is a lack of documentation on how to write a WebUSB device handler. Here, I'll document how I “reversed” the YubiKey ykchalresp binary and wrote a WebUSB driver for a YubiKey with HOTP enabled. (Note: Since the code is open-source, I could have just read it as is, but this may not always be the case.
This exercise offers a way to write drivers when the code is not available.)

## Setting up your dev environment

In order to develop for WebUSB, you need to move a few things around. First, you need the latest version of Chromium. Second, you need to run it with a couple of flags and a local web server to serve your WebUSB JavaScript files. Third, you may need to enable a couple of flags within Chromium to enable experimental features.

Start Chromium like this:

```bash
$ chromium --disable-web-security --allow-insecure-localhost
```

We'll be serving the files using a plain HTTP server from Python (though you can use any server you may feel comfortable with to host the files). By default, WebUSB is not enabled if the content is not served through HTTPS with a trusted certificate (++ for security here). A complete list of flags can be taken from [this site](https://peter.sh/experiments/chromium-command-line-switches/), in case you're curious, although you will not need more than these two.

Finally, depending on the age of your version of Chromium, you may need to enable the experimental features by navigating to chrome://flags and enabling a flag called "Experimental web platform features." If you have done this, then you will need to restart your browser.

After setting up Chromium, you can start serving your local WebUSB files like so:

```bash
$ python3 -m http.server
```

Cool! Now you should be able to navigate to localhost and play around with WebUSB and your device.

## Sniffing the USB device

Another necessary task is to understand what the original USB driver is sending to the device in order to replicate it. Although the device you target may already have an implementation using other libraries (e.g., libusb), or a specification describing these interactions, you may also run into devices that are not documented. (Again, this was not the case with the YubiKey device, but, as you recall, this is also an exercise in reversing an undocumented protocol, so we proceed as if there were no specification.) If there are no documents on how to interact with your device, a simple pcap using Wireshark can work wonders.

## Setting up Wireshark for USB sniffing

To sniff USB traffic under Linux, a user needs to load a few kernel modules and change a few device permissions. The instructions are taken from [this article](https://wiki.wireshark.org/CaptureSetup/USB), but I'll inline the Linux instructions here for the sake of readability:

First, load the usbmon kernel module:

```bash
# modprobe usbmon
```

This will create a series of /dev/usbmonN devices. You need to make them readable by regular users:

```bash
$ sudo setfacl -m u:$USER:r /dev/usbmon*
```

Having done this, you can launch Wireshark and pick an interface to sniff. The one to pick can be easily seen using dmesg. Run dmesg in follow mode (`dmesg -w`) and then plug in your device. You should see something like this:

```bash
[ 6350.949823] usb 1-4: new full-speed USB device number 9 using xhci_hcd
[ 6351.093360] input: Yubico Yubikey NEO OTP+CCID as /devices/pci0000:00/0000:00:14.0/usb1/1-4/1-4:1.0/0003:1050:0111.0006/input/input22
[ 6351.150902] hid-generic 0003:1050:0111.0006: input,hidraw0: USB HID v1.10 Keyboard [Yubico Yubikey NEO OTP+CCID] on usb-0000:00:14.0-4/input0
```

The important thing to note about this part of the log is _usb 1-4_. This means it was connected to the usbmon1 interface.
You can also tell what "address" Wireshark will use from the information on the rest of the line (1.9.x). A sample Wireshark capture of a packet going to our USB device would look like this:

```
398 19.946717 host 1.9.0 USB 72 URB_CONTROL out
```

This was a packet sent from a laptop into 1.9.0, the device that was just connected. Using Wireshark, we can capture the "conversation" between the laptop and the YubiKey (or any other device), and be aware of what is being sent and received.

In this case, the host sends a series of URB_CONTROL out message(s) with certain flags and the challenge to hash, waits for a status flag to be set on the replies, and starts reading the resulting HOTP hash. You can see the relevant bits of the conversation on packets 8 to 47 in this [pcap](https://ptpb.pw/0Pnn.pcapng).

## Translating sniffed packets into WebUSB calls

Now that we know what we need to do, we can try to replicate the behavior using WebUSB to interact with our devices. For example, the details of the packet listed above are as follows:

*(Wireshark screenshot of the decoded packet fields.)*

This can be translated into the following WebUSB call:

```
Device.controlTransferOut({
    "recipient": "interface",
    "requestType": "class",
    "request": 9,
    "value": 0x0300,
    "index": 0
}, Data);
```

You may suspect that some of the values on the Wireshark scan are the same as the arguments sent to the control transfer out. Well, it *is* that simple. If you don't want to understand what these values mean (I certainly won't cover them here), you could just blindly build the same request and see how the device behaves.

These calls return a promise object, which resolves with the data the device contains after our call. We would have to chain these promises to effectively have a conversation with our YubiKey. However, this may not be as straightforward as with other approaches.

## Fun with promises

The WebUSB API relies on promises, which make writing driver-like (e.g., polling and/or synchronous) code a little weird. This is because WebUSB is merging two worlds: one being the weird JavaScript "asynchronicity" of the web space, and the other, the structured-protocol, raw-byte-handling world of low-level device interaction. This combination will often lead to a design pattern: nested promises.

A nested WebUSB promise, in simple terms, is something that does the following:

1. Start a promise by sending a request. The result will be handled by another promise.
2. The second promise will check whether the request is ready (i.e., the read frame says "good to go"):
   * a. If it's not ready, start another promise exactly like the one in step 2.
   * b. If it is ready, then move on and resolve the “outer” promise, so we can continue onward to the next step.

This may be easier to picture in the diagram below.

*(Diagram: an outer promise chain follows the protocol step by step, while an inner chain of identical polling promises waits for the device to become ready.)*

This construction makes it so that the outer promise can construct a promise chain that follows a structured protocol, such as the one used by the YubiKey HOTP interface. In contrast, the inner promise chain will make the outer promises hold on until the device is ready for the next step.

This way, we can write a USB device handler that looks pretty much like the drivers you would write with libusb, but can do so using the async and pretty JavaScript-y API of WebUSB.
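For reference, here is a minimal sketch of this nested-promise construction. The `controlTransferOut` setup packet mirrors the example above, but the status-polling request, the 8-byte reply, and the "ready" bit are hypothetical placeholders for whatever the device's protocol actually defines.

```javascript
// Sketch of the nested-promise pattern. The status request (request: 1),
// the 8-byte reply, and the "ready" bit in byte 7 are hypothetical placeholders.
function waitUntilReady(device) {
  // Inner chain: poll the device, starting another identical promise
  // until the status frame says "good to go".
  return device.controlTransferIn({
    requestType: "class",
    recipient: "interface",
    request: 1,
    value: 0,
    index: 0
  }, 8).then(result => {
    const ready = result.data.getUint8(7) & 0x40; // hypothetical ready flag
    return ready ? result : waitUntilReady(device);
  });
}

// Outer chain: send the challenge, wait for readiness, then read the reply.
function challengeResponse(device, challenge) {
  return device.controlTransferOut({
    requestType: "class",
    recipient: "interface",
    request: 9,
    value: 0x0300,
    index: 0
  }, challenge)
    .then(() => waitUntilReady(device))
    .then(() => device.controlTransferIn({
      requestType: "class",
      recipient: "interface",
      request: 1,
      value: 0,
      index: 0
    }, 8))
    .then(result => new Uint8Array(result.data.buffer));
}
```

Each step of the device's protocol then becomes one link in the outer chain, while the inner waitUntilReady chain quietly absorbs the polling.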
Writing WebUSB handlers/drivers is rather fun and easy once you get started. Reversing USB devices is a fun side project that may keep you interested/busy for a good week, while learning a little about the things we plug into our computers every day.

diff --git a/_posts/2018-02-13-FogComp.md b/_posts/2018-02-13-FogComp.md
deleted file mode 100644
index 11ef95cb..00000000
--- a/_posts/2018-02-13-FogComp.md
+++ /dev/null
@@ -1,104 +0,0 @@
---
layout: article
title: "Seattle and Fog Computing: Bringing the Cloud Closer to the IoT"
subnav: blog
comments: true
tagline: 'There is no doubt that cloud computing, in one form or another, is here to stay. In 2017, Gartner, a prominent research firm, estimated an 18% growth in worldwide revenue...'
author: 'Lois Anne DeLong'
categories:
  - 'Seattle'
---

There is no doubt that cloud computing, in one form or another, is here to stay. In 2017, [Gartner](http://www.gartner.com/newsroom/id/3616417), a prominent research firm, estimated an 18% growth in worldwide revenue from the technology by year's end, to a total of $246.8 billion. In light of this growth, it is perhaps not surprising that cloud computing has already birthed a number of alternative configurations. Unfortunately, those alternative configurations have also given birth to a messy hodgepodge of terminology, such as edge computing, mist computing, cloudlets, and fog computing.

The problem with this situation is that these terms are often used interchangeably, even when the systems they describe have significant differences. In addition, without standardized terminology, it is difficult to establish a timeline for the development of a technology. For example, while the term "fog computing" may have been commonly used only in the last two to three years, incarnations of what could be called the basic principle of fog computing—replacing a centralized cloud with distributed units that can do all the necessary computation in a data hub on a smart device, or in a smart router or gateway—have been around almost since the beginning of the cloud itself. One such example is SSL's own [Seattle Testbed](https://seattle.poly.edu/html/), which for close to a decade has allowed researchers to securely run code on a variety of devices (laptops, tablets, smartphones) using computation power and storage donated by universities and individuals.

Last year, the National Institute of Standards and Technology stepped in to "clear the fog" and provide some needed clarity to how we talk about this technology by publishing a document of accepted definitions, characteristics, acronyms, and abbreviations. The brief report, called simply “The NIST Definition of Fog Computing,” sets out to provide “clear distinction” between “fog computing...and related concepts.” The official definition for fog computing put forward in the document is:

>Fog computing is a horizontal, physical or virtual resource paradigm that resides between smart end-devices and traditional cloud or data centers. This paradigm supports vertically-isolated, latency-sensitive applications by providing ubiquitous, scalable, layered, federated, and distributed computing, storage, and network connectivity.

The NIST document could not be more timely, as fog technologies will be relied on to support the growing computing needs of smart devices on the Internet of Things (IoT).
The "fog computing" label is a playful interpretation of the way -the architecture of such systems “brings the cloud down to the -ground,” by closing the gap between where data is created and where it is -acted upon. In principle, fog computing systems can perform tasks faster and more -efficiently, as they eliminate the need to send everything to the cloud -for processing. And, with the Gartner firm estimating that, in -just two years time, there will be more than -[20 billion smart devices](http://www.zdnet.com/article/iot-devices-will-outnumber-the-worlds-population-this-year-for-the-first-time/), -the flexibility and distributed nature of fog computing systems -will be needed to support that growth. - -The researchers involved with the aforementioned Seattle made the case last -fall for why it is in a good position to support the -"vertically-isolated, latency-sensitive applications" cited in the NIST definition. -Fog computing comes with a number of challenges. The first is the rapidly -expanding variety of smart devices.[Yi et al.](http://www.cs.wm.edu/~syi/publications/mobidata15_1.pdf) -note that the variety of resources that can act as servers in a fog system range -from “resource-poor devices such as set-top boxes, access points, routers, switches, -base stations, and end devices, or resource-rich machines such as -Cloudlet...a ‘cloud in a box’ available for use by nearby mobile devices.” As noted in a [paper](https://github.com/aaaaalbert/fogwc/raw/SUBMITTED/paper.pdf) -presented by SSL research professor Albert Rafetseder at the inaugural Fog World -Congress last November, Seattle is already running on -a variety of heterogeneous nodes, or alternative platforms, such as Android devices, -resource-limited structures like Raspberry Pis, or routers and embedded devices -running OpenWrt. In addition, the “loose coupling and -precise trust boundaries” of Seattle’s components enable “deployments with -minimal mutual trust requirements,” and allow new infrastructure components -to be “introduced freely to replace or augment existing ones, as long as -the component interfaces are adhered to.” This supports the distributed nature -of fog systems. - -Most importantly, according to the paper's authors — Rafetseder, Lukas Pühringer, -and Justin Cappos —Seattle’s proven track record in protecting the safety of host -devices and the security of any data on them as a plus for its use in fog computing -applications. Seattle’s [sandboxed environment](https://github.com/SeattleTestbed/repy_v2/blob/master/README.md), -which isolates code run on the devices from other applications and data, -also imposes strict usage quotas for all resources of the hosting system, -including Central Processing Unit (CPU) time and memory, used disk space, -and even IP addresses and port numbers on network interfaces. And, from a -security standpoint, isolation keeps buggy or deliberately destructive code -from harming the host machine. - -Since fog computing is still in its infancy, it may be too soon to predict -what components or systems will ultimately become standard. In a conversation -conducted shortly before the paper was presented, Rafetseder described the -“uncertainty” attached to the Fog concept. “We don't know if it will be widely -deployed, what deployments will actually look like, or what companies and use -cases there will be. 
However, we know from experience with Seattle that our -architecture is quite well prepared for whatever shape the landscape will turn out to take.” - -Note: The final version of the NIST publication must be purchased, but the -draft circulated for comment last August is available free of charge at (https://csrc.nist.gov/csrc/media/publications/sp/800-191/draft/documents/sp800-191-draft.pdf). -The final draft, released by the 4th Watch Publishing Co. in November, can be -ordered in both Kindle and print formats from Amazon. diff --git a/_posts/2018-04-03-SensHack18.md b/_posts/2018-04-03-SensHack18.md deleted file mode 100644 index abc67297..00000000 --- a/_posts/2018-04-03-SensHack18.md +++ /dev/null @@ -1,104 +0,0 @@ ---- - -layout: article -title: "SAS Hackathon Creates Teachable Moments for Sensibility Developers" -subnav: blog -comments: true -tagline: 'Prior to its 2014 gathering, the IEEE Sensors -Application Symposium (SAS) invited the Sensibility Testbed team from SSL -to run a hands-on, day-long workshop for participants at the conference in -Queenstown, New Zealand. At the time...' -author: 'Lois Anne DeLong' -categories: - - 'Sensibility' - ---- - -Prior to its 2014 gathering, the IEEE Sensors -Application Symposium (SAS) invited the Sensibility Testbed team from SSL -to run a hands-on, day-long workshop for participants at the conference in -Queenstown, New Zealand. At the time, the fledgling testbed project was new -enough that it had only been dubbed “Sensibility” -in August of the previous summer. As such, the workshop offered the project -an exciting showcase in which teams of 3 to 4 participants used -Sensibility to build applications that could -securely access and creatively apply data from smartphone sensors. The initial -Hackathon attracted about 20 people, and a team from the University of Houston -took the top prize for an app that monitored battery levels and informed the -user to turn off Wi-Fi or Bluetooth when power levels fell too low. - -The SSL team has gone on to host five more events at SAS conferences, counting -this year’s program on March 13 at the Koreana Hotel in Seoul, Korea. The apps -produced onsite at the workshops have included one that can detect when a device -owner slips and falls (winning entry in 2016 Workshop -in Catania, Italy, developed by a team with members from Qatar and Norway) -and another that scans nearby WiFi networks to identify the access router with -the best signal quality (winning entry in last year’s event in Glassboro, NJ, built -by Claudio Crema from the University of -Brescia, and Majed Alowaidi from the University of Ottawa.) - -While sparking innovation in its participants, the workshops have also -been a learning experience for the Sensibility team. In a paper titled “Making -Sensibility Testbed Work for SAS,” Yanyan Zhuang (University of Colorado-Colorado -Springs), Richard Weiss(Evergreen College), Albert Rafetseder, and -Justin Cappos (NYU Tandon) describe -how the workshops helped them improve the basic design of the testbed. It also -taught them a few lessons about how to run a successful hackathon. - -As described in the paper, during the second hackathon, held in Zadar, Croatia, in -2015, a few complications emerged relating to two particular components. The XML-RPC -interface, a remote procedure protocol was initially used to communicate between -the Sensibility app, which was written in Java, and the sandbox, written in a form -of Python. 
However, in observing participants working with the testbed, -it became clear that this interface was slowing down performance, particularly -Sensibility’s ability to access accelerometers and gyroscope at high frequencies, -as needed for accurately studying the motions of a device. Even worse, as the -authors observe, “the XML-RPC is not a secure communication channel. If an attacker -learns about the designated port, he or she can use an XML-RPC call to get all -the sensor data as is desired.” - -The other problematic component was a third-party library called Scripting -Language for Android (sl4a), which enabled sensor interfaces in Java. Initially, -once the Sensibility app was installed, it could automatically download this library. -But, after a Google Play store policy change eliminated “in-app” installation, -the library had to be downloaded manually. This meant an additional step for -Hackathon participants, taking up time and making the installation process -more complicated. - -By the 2016 Hackathon, in Catania, Italy, the research team had created its own -interface for Sensibility, “by using the Java Native Interface (JNI) to define -Android app interfaces into the Android app on one side, and a custom Python -interpreter that then hosts the Sensibility Testbed sandbox on the other side.” -This change, coupled with a few other design modifications (writing and compiling -custom sensor bindings into the Python interpreter using the CPython API, and -the creation of wrapper functions for the sandboxed code “so that the extent -of sensor access can be tailored,” ) meant that both sl4a and the XML-RPC -interface could be eliminated. Now, installing Sensibility was a much simpler -one-step process, and the testbed was both easier to use and more secure. - -However, just because your design is optimized doesn’t mean your event will -run smoothly. The authors note that the 2016 event was hosted in the Museo -Diocesano in Catania, a museum holding artifacts that date back to the 13th -Century. The intermittent and poor quality WiFi in the building seemed -designed to match the age of its surroundings. Whether compensating -for slow WiFi, or making sure the testbed kept pace with the ever-evolving -Linux kernel on which Android devices are based, the hackathons forced -the research team to stay a step ahead. Sometimes that meant improving -documentation for those programming in Python for the first time, other times -it meant guiding participants using Windows laptops through the process -of downloading the Python environment. - -Lastly, the team acknowledges that balancing Sensibility between the -security and privacy needs inherent in running on donated devices, -and the usability features required to allow rookie programmers to -install and design apps with minimal training, is an ongoing challenge. -Pointing to the cameras and microphones on devices, which Sensibility -automatically disables access to in order to protect the privacy of donors, -the researchers note that participants often request access to these sensors -“for applications like facial recognition and intrusion detection.” The -answer to such requests is always “no.” Though it’s “made some applications -impossible to implement,” the authors conclude, “the resulting security -benefit was greater.” - -To read more about the SAS workshops and Sensibility Testbed, access -the article [here](https://ssl.engineering.nyu.edu/papers/zhuang_sensibility_sas_2018.pdf). 
diff --git a/_posts/2018-05-06-TUFCCI.md b/_posts/2018-05-06-TUFCCI.md deleted file mode 100644 index 24c0d3c7..00000000 --- a/_posts/2018-05-06-TUFCCI.md +++ /dev/null @@ -1,79 +0,0 @@ ---- - -layout: article -title: "TUFening the Cloud: CII Badge Acknowledges Standards of The Update Framework" -subnav: blog -comments: true -tagline: 'When the Linux Foundation adopted The Update Framework (TUF) in October -of 2017, it both recognized what the software update framework had already -achieved to date and set a new standard for it to strive for. As one of...' -author: 'Lois Anne DeLong' -categories: - - 'TUF' ---- - -When the Linux Foundation adopted The Update Framework (TUF) in October of 2017, -it both recognized what the software update framework had already achieved to -date and set a new standard for it to strive for. As one of only 14 projects -under the umbrella of the Cloud Native Computing Foundation (CNCF), -TUF is expected to forward the group’s mission of “making cloud-native computing -universal and sustainable.” The CNCF vision of cloud-native computing as -“enabling cloud portability without vendor lock-in” meshes well with -TUF’s longstanding commitment to keep its architecture and specification open, -concise, self-contained, and able to be integrated into any software update system. - -But, opportunity also brings responsibility, and the TUF research team wasted -little time in making sure it was living up to the “best practices for open -source projects” advocated by CNCF. These best practices are codified in -the CNCF Core Infrastructure Initiative Best Practices, -or simply the CII Badge program. By providing a standardized form of certification -in the open source community, the group aims, in the words of TUF senior -developer Vladimir Diaz,“to improve code readability, maintainability, security, -community involvement,” and increase “contributions from people outside of the project.” - -The CII program was announced in May of 2016 and certified a core group of open -source software projects shortly afterwards. Initially all badges were one level, -and were referred to simply as “passing.” A little more than a year later, Gold -and Silver badges were added to recognize projects that are willing to commit -to higher standards. To earn a silver badge, which TUF completed at the beginning -of May, it needed to attest that it had adopted a code of conduct, that its -governance mode was clearly defined, and that it used at least one static analysis -tool to look for common vulnerabilities in the analyzed language or environment. -To date, TUF is only the fourth organization to achieve Silver badge status. - -Applying for a badge is not overly complicated. Individuals can voluntarily self-certify -their projects, at no cost, by using a web application to explain how they follow -each of the listed best practices. These practices include having a website -that clearly describes the nature of the project, its governance and -contribution processes, licensing information, access to documentation on how -to create, manage, and use software, and a mechanism for bug reports, comments, -and contributions. In total, the site asks questions about 66 criteria organized -into 6 categories, including quality and security. 
- -The home page for the [badge](https://bestpractices.coreinfrastructure.org/en) -notes that the program was inspired by “the many badges available to projects on -GitHub.“ As of the time of this writing, 169 projects, including prominent -FLOSS (Free/Libre and Open Source Software) programs such as GnuPG, blender, -GitLab, Kubernetes and Node had achieved passing status. Though the certification -process is voluntary and carries no legal or regulatory status, the badge does -offer consumers a quick way to check what projects are following best practices -and, as a result, might be “more likely to produce higher-quality secure software.” - -For Diaz, there is another advantage to participation in the CII program. -“It makes it easy to identify, learn about, and keep track of the best practices -that our project might want to adopt. Our responses in the self-certification also -makes it easy for the CNCF to go through them and verify that we actually follow -these best practices.” - -After earning the CII badge, the TUF project was highlighted on the program’s front -page. David A. Wheeler, who leads CII Badge, wants the program to "highlight -different kinds of security-related projects that have badges, since we're particularly -interested in securing critical software. In addition, we want people to learn about -software distribution hardening systems like TUF; we hope that highlighting TUF would -make it a little more visible." TUF and Notary, Docker’s implementation of TUF, were -the first security projects adopted by CNCF and thus reflect this increased emphasis -on security. - -For more information on the CII Best Practices Badge program, go to the -[criteria](https://github.com/coreinfrastructure/best-practices-badge/blob/master/doc/criteria.md), -or [statistics](https://bestpractices.coreinfrastructure.org/en/criteria) pages. diff --git a/_posts/2018-06-05-18-Atomsupd.md b/_posts/2018-06-05-18-Atomsupd.md deleted file mode 100644 index 0eb22250..00000000 --- a/_posts/2018-06-05-18-Atomsupd.md +++ /dev/null @@ -1,103 +0,0 @@ ---- - -layout: article -title: "What makes confusing code so confusing? Current Atoms initiatives look -for the Hows, Whys and Wheres" -subnav: blog -comments: true -tagline: 'The Atoms of Confusion project deals with perhaps the most random -variable in the development of software—the human programmer who writes and/or -maintains the code. The contention of...' -author: 'Lois Anne DeLong' -categories: - - 'Atoms of Confusion' ---- - -The Atoms of Confusion project deals with perhaps the most random variable in -the development of software—the human programmer who writes and/or maintains -the code. The contention of the Atom’s research team is that there can be issues -within a piece of code that cause programmers to make costly or potentially -damaging assumptions about its output. As such, the project explores how elements -in the code affect human comprehension. - -In a relatively short span of time, the group has conducted and evaluated the -results of two carefully constructed user studies, and has obtained empirical -proof that there are small, self-contained patterns within lines of code that -could cause programmer confusion. 
To date, 15 of these patterns have been identified and confirmed as “Atoms of Confusion,” including several ignored by most commonly-used style guides, and one---the use of curly braces---where our findings contradicted both the [NASA](http://homepages.inf.ed.ac.uk/dts/pm/Papers/nasa-c-style.pdf) and [Linux](https://slurm.schedmd.com/coding_style.pdf) style guides.

By the time the paper documenting these study results was presented at the Foundations of Software Engineering Conference in September 2017, the research team—which in addition to students, faculty, and staff at NYU Tandon also includes personnel at the Pennsylvania State University and the University of Colorado, Colorado Springs---had opened up several new research fronts. One group began to measure factors that can affect levels of confusion, such as where the atoms are located within a piece of code, while another looked at how these confusing patterns influence brain activity in developers. Still another group set out to see just how omnipresent these atoms are within code "in the wild," by conducting a quantitative assessment of the frequency of atoms in real-world software.

Here is a brief review of the research initiatives the Atoms of Confusion project has undertaken over the past year.

#### __Atoms Do Exist in the Wild__

While the initial studies did prove the existence of Atoms of Confusion, their original code corpus was selected precisely for its likelihood to contain atoms. That left an important question to answer: were these confusing patterns just as prevalent “in the wild?” A group led by NYU Tandon Ph.D. student Dan Gopstein identified a corpus of 14 of “the most popular and influential open source C and C++ projects” to measure the number of atoms, if any, that they might contain. They found that the 15 confirmed atoms occurred “millions of times in programs like the Linux kernel and GCC, appearing on average once every 23 lines.”

The research team, which also included Hongwei Henry Zhou and faculty members Phyllis Frankl and Justin Cappos, summed up the significance of this work by noting it demonstrated “that beyond simple misunderstanding in a lab setting, Atoms of Confusion are both prevalent---occurring often in real projects---and meaningful,” as they are “being removed by bug-fix commits at an elevated rate.” A [paper](https://atomsofconfusion.com/papers/atom-finder-msr-2018.pdf) documenting the Atoms Finder work was presented at the Mining Software Repositories conference in Gothenburg, Sweden, in May, and was honored by conference organizers as a distinguished paper. This marks the second time this recognition has been given to an Atoms paper, and also the second time a paper on which Gopstein was lead author has been so honored.

#### __What Programmers are Really Thinking When they Think about Code__

As the Atoms project evolves, one question the team continues to return to is “why are these code patterns confusing?” While the answer probably lies in a confluence of physical and psychological factors, one line of investigation that could prove helpful is learning how confusion manifests itself in the brain waves of programmers. Martin K.C. Yeh, an Atoms team member from the Pennsylvania State University, has been using an inexpensive, non-invasive EEG device to record the brain activity of developers when shown both confusing and non-confusing code snippets.
The results of a pilot study on 8 subjects indicate that more neurons may be active when a subject is solving confusing code snippets.

Yeh summarized his findings from the pilot study in a [paper](https://atomsofconfusion.com/papers/program-comprehension-eeg-2017.pdf), delivered at the Frontiers in Education Conference in October 2017. Coauthored by Gopstein, Yanyan Zhuang, and Yu Yan, the paper also suggested that “intelligent tutoring systems” might be able to incorporate EEG “as an input to provide detailed explanations, extra practices, additional examples, or select different instructional strategies” when brainwaves suggest confusion.

Since that presentation, Yeh has completed a second round of tests using a larger subject pool and he is currently analyzing the results.

#### __Other Initiatives__
The Atoms of Confusion team has three other initiatives currently in progress.
* Examining the influence of an atom’s position within a snippet of code on its ability to create confusion.
* Identifying more potential atoms for testing and confirmation.
* Determining the effect of developers' native languages on their perception of code, and their susceptibility to atoms.

To follow the progress of the Atoms of Confusion project, check our [web site](https://atomsofconfusion.com/).

diff --git a/_posts/2018-06-26-SeattleRetire.md b/_posts/2018-06-26-SeattleRetire.md deleted file mode 100644 index fe96772c..00000000 --- a/_posts/2018-06-26-SeattleRetire.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -layout: article -title: "Retiring Seattle Testbed -- 10 Years and Thousands of Users." -subnav: blog -comments: true -tagline: 'Ten years ago, I (Justin Cappos) wrote the first few lines of code in a Python based sandbox called Repy, intended for use in a new peer-to-peer cloud environment. This started...' -author: 'Justin Cappos' -categories: - - 'Seattle' - - 'Sensibility' ---

Ten years ago, I (Justin Cappos) wrote the first few lines of code in a Python-based sandbox called Repy, intended for use in a new peer-to-peer cloud environment. This started a project that featured code contributions from over 100 participants, was used by over four thousand developers, and was installed on tens of thousands of devices.

Seattle provided students, developers, and educators a chance to run code on different computers, phones, and servers around the world. This was often used to relay traffic from a foreign location, giving students the opportunity to explore how the Internet's performance varies from different parts of the world.

Due to the tireless effort of many students to improve Seattle's ease of use and pedagogical value, the Seattle Testbed was featured in many papers and given a number of awards. Student contributions, in particular from undergraduates, always played a heavy role in Seattle's growth. This includes many students who won prestigious awards for their research (multiple CRA Outstanding Researcher and NSF Fellowship Awards), students who got top industry jobs (at Microsoft, Facebook, Google, Apple, etc.), and even some who founded their own companies (including Justin Samuel, who founded ServerPilot, and Armon Dadgar and Mitchell Hashimoto, co-founders of HashiCorp).

Of course, graduate students and senior researchers also played a major role. Among the PhD students, Ivan Beschastnikh (now a professor at UBC) created the first version of the Seattle clearinghouse.
Albert Rafetseder did a lot -of research using the Seattle testbed as a PhD student and later became the -lead for the Seattle Testbed. His efforts were not only essential for -keeping the testbed running, but also helped to grow and strengthen the -platform. - - -Notable amongst the uses of Seattle testbed was the creation of Sensibility -Testbed. Yanyan Zhuang (now a professor of UCCS), led the Sensibility Testbed -project, a sensor-focused, smartphone variant of Seattle that uses much of -its codebase. Sensibility testbed has been the subject of many successful -hack-a-thons and has been an enjoyable project to work on. However, with -Seattle being retired, Sensibility also will not be continued. - -While Seattle (and Sensibility) will still be supported and used in the -classroom, no substantial development will be done other than bug fixes. -We appreciate the dozens of instructors that used Seattle (in about 100 -classes), the thousands of developers that built applications, and the -thousands of students that used Seattle in the classroom. Thank you all -for your support! diff --git a/_posts/2018-07-02-intotoKubecon.md b/_posts/2018-07-02-intotoKubecon.md deleted file mode 100644 index 7d942ab3..00000000 --- a/_posts/2018-07-02-intotoKubecon.md +++ /dev/null @@ -1,74 +0,0 @@ ---- - -layout: article -title: "Eliminating Weak Links in the Software Supply Chain" -subnav: blog -comments: true -tagline: 'There is a growing awareness in the open source software development -community that, even if a new or revised program is secured at each individual -step of its development an uncontaminated final product is still not guaranteed. The nature...' -author: 'Lois Anne DeLong' -categories: - - 'in-toto' ---- -There is a growing awareness in the open source software development community that, -even if a new or revised program is secured at each individual step of its development, -an uncontaminated final product is still not guaranteed. The nature of the software -supply chain is such that multiple entities will generally interact with a product -before it reaches the user. It is very common for a program to be written by one team of -developers, and then have a completely different team do the testing, packaging, and -distribution. So what happens in these “spaces in-between?” What is to prevent a -malicious actor from tampering with the product as it is in transit from one stage -to another? And, how do we know if the work was carried out only by those authorized -to do so, or if the final product matches its expected design and function? - -Over the past two years, a team at the Secure Systems Lab of NYU Tandon has been -developing and deploying a framework called in-toto. It is designed to -secure the integrity of the software supply chain from end to end, and at -every point in-between. Developed in collaboration with a research team at -the New Jersey Institute of Technology, the framework attests to the integrity -and verifiability of all actions performed while writing and compiling code, -and testing and deploying software. - -In roughly the same time frame, a team at Google was contemplating how to provide -a similar form of end to end protection for container images in the cloud. 
Enlisting the help of JFrog, Red Hat, IBM, Black Duck, Twistlock, Aqua Security and CoreOS, Google launched Grafeas, an API that “provides organizations with a central source of truth for tracking and enforcing policies across an ever-growing set of software development teams and pipelines.” Build, auditing and compliance tools can use the Grafeas API to store, query and retrieve comprehensive metadata on software components of all kinds.

At the most recent edition of KubeCon+Cloud Native Con Europe 2018, held May 2 to 4 in Copenhagen, Denmark, Wendy Dembowski of Google and Lukas Pühringer of NYU delivered a presentation that explored the combined potential of these programs to protect the “software supply chain security ecosystem.” The talk featured real-life examples, documenting how these tools have been deployed on projects for Debian, Arch Linux, reproducible builds, and Docker. Primarily, though, the talk pointed towards how these tools can be combined to facilitate continuous delivery in the cloud. As noted in the conference program, continuous delivery has become a “prevalent concept in the cloud native ecosystem,” having “drastically simplified and accelerated development and deployment of software.” However, in doing so, it has created “an attractive target for attacks.” Linking these independent projects presents an opportunity to ensure enhanced software supply chain security in the cloud, no matter what the delivery mechanism for the product may be.

Though potentially these two products could be combined in a number of configurations, in the talk Dembowski and Pühringer highlighted the operation illustrated below. Basically, it shows in-toto using its metadata to verify the supply chain of a software product, and then pushing the results into a Grafeas attestation. The attestation—simply a “yes” or a “no” as to whether it passed or failed verification—would be forwarded to the Grafeas server in the cloud, where it would instruct the “Admission Controller” as to whether or not it is safe to admit the product to the cloud.

Such a merger would greatly reduce the attack surface available to malicious players. An article about the conference, written by Antoine Beaupré for LWN.net, quotes Pühringer as saying that “Grafeas provides a well-defined API” that “allows the user access to where you can push metadata”—be it package versions, builds, images, deployments or attestations, such as those provided by in-toto. Because Grafeas is “well-integrated in the cloud ecosystem,” while “in-toto provides all the steps in the chain,” Pühringer concludes, “It seems natural to marry the two projects.”

diff --git a/_posts/2018-09-13-SPIFFEanalysis.md b/_posts/2018-09-13-SPIFFEanalysis.md deleted file mode 100644 index e296462f..00000000 --- a/_posts/2018-09-13-SPIFFEanalysis.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -layout: article -title: "SPIFFE Security Analysis Made Public" -subnav: blog -comments: true -tagline: 'Over the past few months, I have been working with contributors to SPIFFE and SPIRE to do a security analysis of their projects. Part 1 of our analysis is now live...' -author: 'Justin Cappos' -categories: ---

Over the past few months, I have been working with contributors to SPIFFE and SPIRE to do a security analysis of their projects. Part 1 of our analysis is now live.

Feel free to check it out!
diff --git a/_posts/2018-09-21-SPIFFEanalysispart2.md b/_posts/2018-09-21-SPIFFEanalysispart2.md deleted file mode 100644 index 0dd9cc0c..00000000 --- a/_posts/2018-09-21-SPIFFEanalysispart2.md +++ /dev/null @@ -1,19 +0,0 @@ ---- - -layout: article -title: "SPIFFE Security Analysis Made Public (part 2)" -subnav: blog -comments: true -tagline: 'As discussed in the previous blog entry, I have been working with -contributors to SPIFFE and SPIRE to do a security analysis of their projects. -The second part of the analysis...' -author: 'Justin Cappos' -categories: ---- - -As discussed in the previous blog entry, I have been working with contributors -to SPIFFE and SPIRE to do a security analysis of their projects. The second -part of the analysis (which covers the attack prioritization and what steps -to take going forward) is now live. - -Find part two here! diff --git a/_posts/2018-09-24-poppaths.md b/_posts/2018-09-24-poppaths.md deleted file mode 100644 index 1451a876..00000000 --- a/_posts/2018-09-24-poppaths.md +++ /dev/null @@ -1,75 +0,0 @@ ---- - -layout: article -title: "Make Popular Paths Popular" -subnav: blog -comments: true -tagline: 'Nowadays, untrusted or buggy programs are everywhere, from plugins -running in your web browser, to a third-party Python library running on your -local machine. As awareness of the scope of this problem grows...' -author: 'Yiwen Li' -categories: - - 'Lind' - ---- -Nowadays, untrusted or buggy programs are everywhere, from plugins running -in your web browser, to a third-party Python library running on your local -machine. As awareness of the scope of this problem grows, many people are -turning to containers to protect their system from the security breaches these -untrusted programs can cause. Containers have become an attractive solution -because, as the name implies, they can provide isolation, and “contain” -programs run inside of them. With emerging containerization tools, -such as [Kubernetes](https://kubernetes.io) -and [Docker](https://www.docker.com), providing practical solutions for -deploying and running programs, more and more people are choosing containers. - -However, as use of the technology has expanded, it has become clear that -“containers” do not really contain all that well. Ultimately, they have to -rely on the underlying host kernel to perform core functionalities, such as -file system operations, network, etc. With so many [security vulnerabilities](https://www.cvedetails.com/product/47/Linux-Linux-Kernel.html?vendor_id=33) -inside the host kernel, giving it access to risky code could create -a huge problem. - -As containers cannot function without access to the host kernel, then the -question becomes can we identify where the bugs may be located and prevent -access to those areas? Almost two years ago, as part of the Lind project, -we developed and tested a metric that could predict where bugs are likely -to be located in the Linux kernel. The central idea is that the [“popular paths”](https://ssl.engineering.nyu.edu/papers/li_lind_usenix_2017.pdf ), -which refer to lines of code frequently executed by popular user programs, -contain fewer bugs and so, if access could be limited to these paths, the -chance of triggering bugs would be greatly reduced. - -In the work referenced above, we showed it was possible to build a virtual -machine that functioned with only limited access to the kernel. 
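(As a purely illustrative aside: the sketch below is not the actual Lock-in-Pop tooling, but it shows the kind of bookkeeping involved in separating "popular" from "unpopular" lines. It assumes gcov-style annotated output, in which never-executed lines are marked with `#####`; the file handling and names here are made up for the example.)

```python
# Illustrative sketch only -- not the actual Lock-in-Pop procedure or tooling.
# Reads gcov-annotated files (*.gcov), where each line looks like
#     "   <count>:  <lineno>: <source>"
# Executable lines that were never run carry "#####" as their count, and
# non-executable lines carry "-".

import sys
from pathlib import Path

def classify(gcov_file):
    """Split a .gcov file into lines exercised by the workload and lines that were not."""
    popular, unpopular = [], []
    for line in Path(gcov_file).read_text(errors="replace").splitlines():
        fields = line.split(":", 2)
        if len(fields) < 3:
            continue
        count, lineno = fields[0].strip(), fields[1].strip()
        if count == "-":
            continue                       # not executable code
        if count == "#####":
            unpopular.append(int(lineno))  # never exercised: an "unpopular path"
        else:
            popular.append(int(lineno))    # exercised by the workload: "popular"
    return popular, unpopular

if __name__ == "__main__":
    for path in sys.argv[1:]:
        pop, unpop = classify(path)
        print(f"{path}: {len(pop)} popular lines, {len(unpop)} unpopular lines")
```

The real procedure works over kernel profiling data rather than a handful of files, and the "unpopular" code is then made unreachable, but the classification step itself is conceptually this simple.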
After obtaining the popular paths for an Ubuntu system by running the top 50 packages from the [Ubuntu popularity contest](https://popcon.ubuntu.com), we were able to demonstrate that these paths in the Linux kernel tend to contain fewer security bugs. Thus, by restricting access to risky “unpopular paths,” bugs could be effectively prevented from being triggered in the kernel. We defined a standard procedure, which we called “Lock-in-Pop,” that anyone could follow to obtain the “popular paths” for a favorite virtual machine.

So can we now apply this metric and the notion of restricting path access to containers and other types of virtualization systems? The next stage of popular paths research suggests we can. Currently, we are using the metric to develop a “safe mode” of the Linux kernel, which can be used to run the [LinuxKit container](https://github.com/linuxkit/linuxkit). In our preliminary experiment, we ran 14 popular Docker containers from [Docker Hub](https://hub.docker.com/explore/) with LinuxKit, and used the Gcov kernel profiling tool to capture the kernel trace. We were able to obtain the “popular paths” for LinuxKit, and then inserted a call to the kernel’s panic() function at the beginning of the “unpopular paths” to prevent access to this risky code in the host kernel. We modified more than twenty-three thousand functions in the Linux kernel, which accounts for about one third of its total number of functions. And we verified that the LinuxKit container can still perform required functions with our modified host Linux kernel. Though we are still running tests at this time, we have reason to believe that with this method of trimming the kernel, the issue of negotiating access to untrusted code can be addressed.

For software developers and users who run programs in containers, staying on the popular paths could significantly enhance the security of the host kernel. With this in mind, we actively encourage researchers and developers to try out the metric in container measurements and security evaluations. If enough researchers can help us “make popular paths popular,” and are willing to share their results, we believe the insights gained could lead to more secure virtualization systems.

diff --git a/_posts/2018-10-08-in-toto-tuf-book.md b/_posts/2018-10-08-in-toto-tuf-book.md deleted file mode 100644 index 38b8bf8f..00000000 --- a/_posts/2018-10-08-in-toto-tuf-book.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -layout: article -title: "in-toto and TUF in the new Kubernetes Security Book" -subnav: blog -comments: true -tagline: 'Now you can read about how to secure your Kubernetes delivery using in-toto and TUF in this new book' -author: 'Santiago Torres-Arias' -categories: - - 'in-toto' ---

The potential role TUF and in-toto can play in container security has been spotlighted in a new ebook from O'Reilly Media. "Operating Kubernetes Clusters and Applications Safely," written by Liz Rice of Aqua Security and Michael Hausenblas of Red Hat, addresses how to operate Kubernetes clusters and generate container images securely. In describing this secure cloud-native ecosystem process, the authors acknowledge the way tools like in-toto and TUF are poised to strengthen cloud-native pipelines and container image deliveries.

The book can be downloaded free of charge from the [Aqua Security website](https://info.aquasec.com/kubernetes-security) simply by registering. It's a very good read for people interested in securing their Kubernetes deployments.
diff --git a/_posts/2019-01-18-in-toto-paris.md b/_posts/2019-01-18-in-toto-paris.md deleted file mode 100644 index 8623cfd7..00000000 --- a/_posts/2019-01-18-in-toto-paris.md +++ /dev/null @@ -1,44 +0,0 @@ ---- - -layout: article -title: "in-toto at the Reproducible Builds Summit-Paris 2018" -subnav: blog -comments: true -tagline: 'Last December, the fourth annual Reproducible Builds summit drew an international cross-section of computer professionals to Paris. Though ...' -author: 'Lukas Pühringer' -categories: - - 'in-toto' - ---- - -Last December, the fourth annual [Reproducible Builds summit](https://reproducible-builds.org/events/paris2018/) drew an international cross-section of computer professionals to Paris. Though most of the attendees were involved in some way with Linux distributions and open source software projects, their backgrounds and affiliations were quite diverse. Yet, they all shared the common perception that software builds should be "reproducible", meaning, as defined by [reproducible-builds.org](https://reproducible-builds.org/), that any party should be able to generate "bit-by-bit identical copies of all specified artifacts, given the same source code, build environment and build instructions." - -Reproducibility is a quality which has grown very desirable in recent days because by establishing a consensus on what a "correct" build entails, it also allows users to identify "incorrect" results that could indicate a system compromise. For attackers, a build system is a particularly attractive target, because, as with other steps in the software supply chain, they can impact millions of users with just one successful compromise. As advocates for supply chain security, the [in-toto](https://in-toto.github.io) team was happy to once more be given the opportunity to share its visions about software security at this year’s summit. - -### **From stripping timestamps to sketching out user stories** - -Reproducible Builds 2018 was a three-day, open-agenda, off-device event in which the full plenum periodically decided what [topics](https://reproducible-builds.org/events/paris2018/report/) were to be explored. Then participants branched off into smaller groups to discuss these issues in greater depth. These self-hosted sessions incorporated technical -workshops about relevant tooling, bootstrapping and hardware issues, theoretical discussions about terminology and trust models, and hands-on hackathons, where new package builds were made reproducible. - -According to several core members, for the first time in the series of summits, there was strong interest in how to make the merits of reproducible builds available to the end-users. It was perceived as a sign of maturity that this year’s summit spent less time on troubleshooting fundamental issues of reproducibility, such as timestamps, and more time on thinking about infrastructure that allows the user to determine whether a package she or he wants to install is "correct". - -### **Verifying reproducibility with in-toto** -Over the course of the summit, the in-toto team was excited to share how our framework can communicate and compare build results through its metadata format and verification protocol. Using in-toto tools, clients can verify the reproducibility of a product and hence agree upon the correctness of their installs. 
Furthermore, we had a chance to showcase and discuss an in-toto/Debian integration on which our team, together with community members (kudos to [kpcyrd](https://github.com/kpcyrd) and [Morten Linderud](https://github.com/Foxboron)) had worked in the weeks prior to the summit. - -Our proposed system consists of federated ["rebuilders"](https://salsa.debian.org/reproducible-builds/debian-rebuilder-setup) that autonomously rebuild Debian packages and generate corresponding attestations using the in-toto metadata format (see [rebuilder@NYU](https://reproducible-builds.engineering.nyu.edu/) and [rebuilder@University of Bergen](http://158.39.77.214/) for rebuilders in action). By using a custom ["in-toto apt transport method"](https://github.com/in-toto/apt-transport-in-toto), the installation client transparently fetches the relevant attestations from the available rebuilders and verifies them using a local policy file. The default action for this scenario may be to only install the downloaded package if enough trusted rebuilders agree on its contents. - -### **Debian is only the beginning** -The feedback we received at the summit was very positive. However, a few participants were curious about how our concept would work beyond the scope of Debian. For instance, one attendee mentioned that fetching rebuild attestations as an online operation during package installation was not feasible in his case. - -We easily dispelled this notion for summit attendees and are happy to reiterate it here. The in-toto framework does not prescribe how the attestations for a certain activity are to be aggregated or distributed in order to verify integrity and authenticity. In more general terms, while our proposed rebuilder setup and apt client plugin are indeed tailored to the specific needs of Debian, the underlying in-toto metadata format and verification protocol may be used in any scenario that requires signed evidence for an activity and a way to prove its authenticity and the integrity. - -### **Expanding the limits of reproducible builds** -As the conversations continued, another issue emerged. Even if a quorum of rebuilders agreed on a correct result, they could have all built with the same compromised source code. If this is the case, the flawed code could go unnoticed. The general community response to that concern seemed to be that such a concern is not in the scope of the Reproducible Builds project and that, furthermore, it is less problematic because source code is easier to audit than binaries. While both claims are valid, using in-toto compliant metadata to communicate build results would make it very easy to extend verification up the supply chain. in-toto link metadata is able to record the inputs, as well as the outputs, of each activity of the supply chain. It also provides policy language to continuously link the materials and products of all activities, from writing the source code, over to quality assurance, until building and packaging the binaries. - -We are excited to see the reproducible builds community grow, and we would encourage participants to continue this important work. As the in-toto team looks for ways to secure the whole software supply chain, strong individual links -- such as those reproducible builds can ensure -- will be needed to support these efforts. - -### **Get involved** -Ready to make your favorite software reproducible? Check out the -[Reproducible Builds homepage](https://reproducible-builds.org/) to learn how. 
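And if the rebuilder-based verification described above piques your interest, the sketch below shows the kind of quorum policy a client could apply. It is purely illustrative: it is not the in-toto apt transport or its actual policy format, and the attestation fields, rebuilder names, and threshold are assumptions made for this example.

```python
# Illustrative sketch only -- not the in-toto apt transport or its policy format.
# Assumption: each rebuilder publishes an attestation recording the SHA-256 hash
# it observed for a rebuilt package; the client trusts a fixed set of rebuilders
# and requires a threshold of them to agree before installing.

import hashlib
from collections import Counter

TRUSTED_REBUILDERS = {"rebuilder-nyu", "rebuilder-bergen"}  # assumed identifiers
THRESHOLD = 2                                               # assumed policy value

def package_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def quorum_ok(downloaded_pkg, attestations):
    """attestations: list of dicts like {"rebuilder": ..., "sha256": ...}."""
    local = package_hash(downloaded_pkg)
    votes = Counter(
        a["sha256"]
        for a in attestations
        if a.get("rebuilder") in TRUSTED_REBUILDERS
    )
    return votes.get(local, 0) >= THRESHOLD

# Example: only proceed with installation if enough trusted rebuilders
# independently produced a bit-by-bit identical package.
# if not quorum_ok("foo_1.0_amd64.deb", fetched_attestations): abort the install
```

A check along these lines only becomes meaningful when several independent rebuilders publish attestations, which leads directly to the next point.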
- -Want to encourage reproducibility on a larger scale? If you have available computing resources, consider setting up a [rebuilder](https://salsa.debian.org/reproducible-builds/debian-rebuilder-setup). With each additional rebuilder who can independently attest for build results and publish corresponding in-toto metadata, users can be more confident that they are installing non-compromised packages. To get involved, [reach out to the in-toto team](https://github.com/in-toto/in-toto/blob/develop/MAINTAINERS.txt). diff --git a/_posts/2019-03-13-uptane-standardization.md b/_posts/2019-03-13-uptane-standardization.md deleted file mode 100644 index 8947bde9..00000000 --- a/_posts/2019-03-13-uptane-standardization.md +++ /dev/null @@ -1,45 +0,0 @@ ---- - -layout: article -title: "Setting a New Standard for Automotive Cybersecurity: IEEE/ISTO and Uptane" -subnav: blog -comments: true -tagline: 'Standardization represents an important step in the growth of a product or technology. It implies that a sufficient level of adoption has occurred to warrant ...' -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- -Standardization represents an important step in the growth of a product or -technology. It implies that a sufficient level of adoption has occurred to warrant sanctioned guidelines for its safe implementation and use. - -The Uptane secure software update strategy has now reached this level. At the -end of 2018, the Uptane Alliance was formally launched as the newest member of -IEEE’s [Industry Standards and Technology Organization](https://ieee-isto.org/press-releases/isto-uptane-2018/) -(ISTO). The nonprofit Alliance, which was formally voted -into existence on September 4, 2018, will take on the task of setting the future -direction of the framework’s research, development, and deployment. -As described on the [Uptane web site](https://uptane.github.io/), the group will -serve as “a neutral arbiter that oversees the formal standardization of Uptane, -and promotes security of software updates for the automotive industry,” - -The standardization initiative began in the late summer months of 2018 and, as -of year’s end, had produced a complete draft, which offers guidance on the -design and implementation of the Uptane framework. The document benefitted from -the input of 30-plus individuals employed by original equipment manufacturers, -suppliers, and relevant government agencies, who continue to provide needed reviews and modifications - -In the early months of 2019, the Uptane standards team began compiling best -practices for the deployment of the software update framework. Different from -the standards volume, the deployment strategies will be presented as suggestions, rather than as -the mandatory steps one must take to be Uptane compliant. - -The launch of the Alliance is just one of many milestones achieved by Uptane -over the past year. The technology’s integration into Automotive Grade Linux (AGL) -was a primary reason for the selection of NYU Tandon School of Engineering as an -associate member of both AGL and its parent organization, the Linux Foundation. 
-In announcing the new affiliation, a [press release](https://engineering.nyu.edu/news/nyu-tandon-joins-top-open-source-initiative-automotive-software-and-cybersecurity) from NYU Tandon describes AGL as, “on track to be the leading shared software platform across the industry for in-vehicle applications including infotainment, -instrument cluster, heads-up-display (HUD), telematics, autonomous driving, safety, and advanced driver assistance.” - -The Standards document is available for review by any interested party, either through the -[GitHub repository](https://github.com/uptane/uptane-standard) or as an [html](https://uptane.github.io/uptane-standard/uptane-standard.html) document. diff --git a/_posts/2019-08-12-bridging-the-gap.md b/_posts/2019-08-12-bridging-the-gap.md deleted file mode 100644 index 9aacc263..00000000 --- a/_posts/2019-08-12-bridging-the-gap.md +++ /dev/null @@ -1,76 +0,0 @@ ---- - -layout: article -title: "Building Better Connections Between Systems Researchers and Practitioners" -subnav: blog -comments: true -tagline: 'Academia and industry traditionally play complementary roles in the advancement of scientific and engineering knowledge and in the development of useful products to benefit society as a whole. While industry...' -author: 'Justin Cappos' -categories: - - 'Informational' - ---- - -Academia and industry traditionally play complementary roles in the advancement -of scientific and engineering knowledge and in the development of useful products to benefit -society as a whole. While industry, tied as it is to a bottom line, typically focuses on the -problems of today, including how to bring products to market, academia has the freedom to work on risky ideas that may require years of effort but have a high payout if successful. With funding from government agencies to fuel academic efforts, innovation can proceed at a faster, more advanced pace. Industry can then take the ideas from academia and expand the reach of the technology even further by using it as the basis for practical applications. In turn, this leads to more progress which raises more tax revenue than the government invested --- or at least this is the dream. - -It’s a dream that has worked very well in some scientific disciplines for many years. -Unfortunately, there is a bit of disconnect between academic and industrial practitioners -in some fields, including the field of computer systems, that makes such reciprocal -relationships harder to achieve. This disconnect is caused, in part by a lack of “common grounds” for sharing ideas and developing potential collaborations. Industry participants comprise only a small percentage of attendees at academic conferences, even if one counts attendees from research labs. This impression is bolstered not only by a short survey we recently conducted (discussed in greater detail below) in which roughly half the respondents from industry did not know what OSDI was, but also in a presentation by noted software engineer Bryan Cantrill who, in 2004, found himself the only industry-based presenter at [USENIX ATC](https://www.usenix.org/conference/atc16/technical-sessions/presentation/cantrill). - -Conversely, academics also make up only a small percentage of industry conference attendees. -In our study, only 8% of the academics participating in our study knew that Kubecon was the largest open source software conference. 
Perhaps unsurprisingly, despite substantial effort (including over $82.5 billion of government funding in calendar year 2015 alone) for [technology transfer](https://www.nist.gov/sites/default/files/documents/2018/02/02/fy2015_federal_tech_transfer_report.pdf), academic-based initiatives are facing some challenges in making the “dream crossover” outlined above.

This perceived disconnect is one that must be resolved if the computer systems field is to create the symbiotic relationships that have benefited scientific disciplines in the past. This article explores some potential causes for this disconnect. Given that a [recent study of economic benefits](https://www.wired.com/brandlab/2017/02/tech-transfer-lab-main-street/) derived directly from academic-industry patent licensing reported that between 1996 and 2013 such efforts “grew US gross industry output by up to $1.18 trillion and US gross domestic product by up to $518 billion, and supported more than 3.8 million US jobs,” resolving the disconnect should be a priority for professionals on both sides of the divide.

Note that this blog post focuses specifically on the systems and systems security communities, where researchers build software that they hope industry will utilize. This is notably different from fields where the data is the interesting part of what is being studied (such as privacy research) or where the mathematical properties of the research are the primary outcome (such as theoretical computer science). Hence, the observations made here may not apply to other fields.

### **Do academic and industry researchers in computer science see the world the same way?**

To confirm what was initially just an observation about the differing perspectives of academia and industry, I created [a survey](https://docs.google.com/forms/d/e/1FAIpQLSdtVvZI-z2NAONVMR2aDCCKP9y_zhTd0nLIzAvs4JVXalvPFw/viewform) that asks some basic questions about a few different systems technologies and conferences. The survey has 3 questions about conferences, 3 about technologies, and 4 questions about the importance of certain technologies in practice, along with 3 demographic questions to determine if the participant was a developer, graduate student, professor, etc. This survey was sent out to systems researchers who publish in the systems security space, and to industry personnel in the cloud-native community. Participants were asked to share the study with others who work in a similar role.

After sending out the survey, we received 80 responses: 48 were from participants who said their primary field of work could be described as industry, 26 from participants who described their primary work field as academia, and 8 from those who said their work was an equal mix of both. The findings below focus on the industry and academic participants and ignore the participants who indicated their work was a mix, as this group is much smaller.

Participants were asked not to look up answers or guess randomly, but these properties were not enforced or verified.

As several aspects of the study, such as the participant selection, ability to look up answers, etc. can bias the results, I caution against reading too much into the exact numeric findings. However, the data (raw data is provided here) does suggest some obvious trends.

First, we found some similarities between the groups. The vast majority of academics (96%) and industry participants (83%) were aware that USENIX runs conferences.
There was also rough agreement that containerd, the runtime that underlies the fundamental cloud technology of containerization, is an important technology, with similar numbers from each group.

However, there were some major differences revolving around understanding of key technologies. About a third of academics knew that etcd and Istio are two widely used cloud technologies, while about 60% of industry participants knew these technologies. Conversely, 60% of academics knew about MULTICS, as compared to only 38% of industry participants. When asked if a technology was very relevant in practice to many companies, academics thought both Homomorphic Encryption (20%) and OpenFlow (28%) were much more important technologies than their industry counterparts did (where both scored only 2.1%). Industry was more aware of rkt (a recently archived CNCF project) and found it more relevant, whereas most academics (72%) were not aware of the technology.

Even more concerning, the numbers for professors were more pronounced than for graduate students. This means professors (who had an average of over 17 years of experience) had a much bigger divergence from industry than students (who averaged about 4.5 years). While a follow-up study would be needed to test this, one possible reason for this disparity is that the longer professors are in academia, the more out of touch with industry practices they become.

There were also differences in awareness about conferences. Only 24% of industry participants knew OSDI, one of the most prominent gatherings for systems researchers, was a conference, while 72% of academics polled stated this correctly. While both academia and industry were not overly aware that Kubecon was the largest conference (at least by some metrics), only 8% of academics knew this, while about 30% of industry participants were aware. Participants were asked to list a conference that academics would attend and to list a conference that industry developers would attend. Strikingly, the only conference that appeared on both lists was USENIX, where one of the 80 respondents (a professor) listed it as a place that industry personnel would attend.

One industry participant summed up the lack of interaction between academia and industry as follows: “I just didn't have the slightest idea of a conference academics would visit. Considering I'm a wannabe-academic, that's not good.” So despite awareness and a desire to work together, there is still a substantial gap between academia and industry.

### **Is deep interaction between industry and academia possible in the field of computer science?**

There are some fields in computing that have healthy industry-academia interactions, for example the wireless measurement community. Academia was willing to take a chance on the millimeter wave wireless band and it paid major dividends, with the new 5G standards supporting about 1000x better bandwidth than 4G. In addition, many academic researchers and testbeds were heavily involved in helping to form the new 5G specification. Just as in the ideal model framed in the introduction, academia explored a relevant area, with industry heavily engaged from the earliest days to help shape the problem domain. Then, industry commercialized the findings, which led to a major benefit to all smartphone users, more sales of smartphones and equipment, and finally more government revenue from taxes to reimburse initial efforts. Clearly, when the model works, everyone wins!
### **Why aren’t computer systems professionals in academia and industry talking to each other?**
It’s hard for two groups to talk if they don’t occupy the same space. As stated earlier, academic researchers are essentially nonexistent at major industry conferences focused on building distributed systems, like KubeCon and DockerCon. Similarly, very few industry personnel show up at academic venues. Without speaking, it is no wonder that there isn’t the degree of interaction between academia and industry that would be desirable.
But, the gap can be bridged with a better understanding of exactly what type of work is exciting and relevant to each of these sides. In Part II of this exploration, I will posit a few reasons why academia and industry may value work differently, and thus may not be attending the same conferences.

### **Conclusion**

This writeup was created to start a discussion about how academics might better collaborate with industry. I’ve created a [Google group](https://groups.google.com/forum/#!forum/systems-research-in-practice/join) to further this discussion, where we can discuss experiences, strategies, successes, and failures. I will also be at USENIX Security 2019 in Santa Clara this week and would love to chat with people there in person. I’m curious to hear from others about the extent to which they see the same problem and their attempts (both successful and unsuccessful) to bridge the gap. Most importantly, this forum can provide a place to toss around ideas about how to move forward. Working together on the problem, I’m sure we can greatly improve the state-of-the-art!

### **Acknowledgments**

I'd like to thank Lois Anne DeLong, who helped with the survey and also edited this piece, as well as Santiago Torres-Arias, Damon McCoy, and Rachel Greenstadt for their insightful feedback.

diff --git a/_posts/2019-09-03-bridging-pt2.md b/_posts/2019-09-03-bridging-pt2.md deleted file mode 100644 index 94625699..00000000 --- a/_posts/2019-09-03-bridging-pt2.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -layout: article -title: "Building Better Connections: Part 2" -subnav: blog -comments: true -tagline: 'In my previous post, I described how industry participants and academics within the computer systems field have very different understandings of the world today, and rarely interact at common venues...' -author: 'Justin Cappos' -categories: - - 'Informational' ---

In my previous post, I described how industry participants and academics within the computer systems field have very different understandings of the world today, and rarely interact at common venues. This post takes a look at one potential reason for such a gap: the differences in how academics and industry value a project. I also offer some suggestions for finding common ground for sharing expertise and ideas.

### **Why do academic and industry conferences value different types of work?**
Put succinctly, academia and industry professionals value different types of research, and this creates very different criteria for selection of presentations for conferences and journals. (This remains true despite the fact that both types of conferences are often equally rigorous, with an acceptance rate of ~20% at top venues for both.) To address the gap between these two worlds, it is important to be aware of what excites researchers in each arena. The table below summarizes the characteristics of papers presented to academic vs. industry audiences.
These points are discussed further in the next two subsections. - - - -#### *What Academia Values in Research Papers* -So, what do academics value in a research project? A near universal reviewing category for computer science conferences is the *novelty* of the work. In my experience, the novelty score has a high degree of correlation with whether or not a paper is accepted. “Novelty” here means the paper is defined as sufficiently “different” and “unique” in comparison to prior work that other academics have done on the topic. Work that builds on existing products or technologies is likely to be described as “incremental," which is a dirty word in academia and, for many reviewers, a strong basis for rejection. Hence my group typically rebrands systems with a new name for each publication to make them seem newer to the academic community (of course while still clearly describing the differences between this and our prior work). - -This relentless drive to produce something “new” can, unfortunately, detach the research initiative from any type of practical application. The academic stereotype of having “a solution looking for a problem” does sometimes apply in practice (including in some of my early work). Many academics specialize in techniques (i.e., machine learning, cryptography, blockchain, etc.) instead of problem domains (i.e., securing healthcare, improving cloud native services, etc.). This could lead an academic to apply their technique to a domain in which they are not experts. The lack of expertise leads to misunderstandings about a problem domain. Solutions forged out of confusion can never be applied in practice, since real world constraints have not been considered. It is quite rare to have academic code actually used in production before publication, except perhaps in the form of a very early stage “prototype,” and an actual deployment in a real environment is even rarer still. For example, of the 45 papers presented at NSDI 2016, the only major systems conference with a track soliciting operational experience, only 7 described a production deployment. Five of those with deployments were from technology companies, Microsoft, Conviva, Facebook, Baidu, and Google, and so did not release source code. (The lack of source code is a noted problem in our field since it may make it more difficult for other researchers to replicate the results.) [Our paper](https://www.usenix.org/system/files/conference/nsdi16/nsdi16-paper-kuppusamy.pdf) in that conference was one of only two that described a system used in production, and that released its source code. - -Further compounding this disconnect from practical problem solving is that the motivation for most academic work, and/or the problem to which it is matched, usually comes from prior academic papers. Without validation through a real world deployment, the assumptions behind earlier work is rarely challenged. After all, the academic researchers who wrote the cited paper obviously thought the motivation and problem constraints were real enough for them, and they are likely to be among the reviewers for the new paper. This runs the risk of compounding and propagating errors made by other academics. - -Another characteristic of conference papers based on academic research projects is a certain amount of unwarranted exuberance about the potential impact of the work in order to increase the “buzz” surrounding it. 
Academic research projects often tend to hype the potential impact of the work by using jargon that adds little to any meaningful discussion of what the process or product actually does. My experience has been that few academics want to read a paper about “improving the security of the software update process.” It is much more appealing to read a paper about a “compromise-resilient, community-repository aware, cloud native software update framework,” even though these two papers describe exactly the same system.

Buzz is likely to grow louder if the solution involved is deemed “elegant.” Clever design ideas also tend to be a major factor in paper acceptance. If a “hack” is needed to make something work, this is often viewed as a blemish, and so will likely be glossed over or excised from a paper. Unfortunately, making software work often necessitates such tweaks, hacks, and changes. And, making software work in a real-world environment will likely require more “hacks.” From a purely academic standpoint, this effort is not at all worth the additional likelihood of paper acceptance.

Lastly, in academic papers, the code behind the product or technology presented is generally treated as less important than the idea it supports. The term “grad student code” is effectively a shorthand for undocumented, buggy, partially-implemented software. Recent efforts to reproduce academic results using the software provided by researchers confirm how rudimentary such code tends to be. For example, in 2011, FSE, a top software engineering conference, had a [50% replication rate](https://cacm.acm.org/magazines/2015/3/183593-the-real-software-crisis/fulltext) for academic papers amongst those paper authors who chose to have their results replicated.

Of course, the lack of reproducibility has not gone unnoticed. There are some excellent efforts ongoing to combat this issue. The ACM [launched an initiative](https://dl.acm.org/citation.cfm?id=2812803) to attempt at least “weak repeatability” of the results of 601 papers published in the association’s conferences or in their journals. “Weak repeatability” was defined simply as “do the authors make their source code available, and will it build?” The study’s overall conclusion was that only 32 to 54% of the research presented in the papers could be reproduced, depending on what classification of “reproducible” was applied. When speculating on why this was the case, the article cites an [earlier paper](https://journals.sagepub.com/doi/pdf/10.1177/1745691612462588) whose author summarized the profession’s attitude as follows: “Innovative findings produce rewards of publication, employment, and tenure; replicated findings produce a shrug.”

While achieving reproducibility and the transparency needed to address the problem is a very complex issue, and any potential solution is beyond the scope of this document, one could argue that strong ties with industry might be a good first step. When working alongside individuals for whom the ultimate criterion for any proposed solution is “does it work?,” reproducibility is likely to place somewhat higher in a researcher’s priorities.

#### *What Industry Professionals Value in Research Papers*
The best industry research papers largely mirror what practitioners value in a piece of software. This means industry-based conferences seek presentations that have, at their core, a stable, working product that solves a real problem.
- -The quality of the code base and community of the system described in a presentation is paramount. Software must be engineered in a way that is maintainable. The developer should follow reasonable code style guidelines, reviews must be performed, and all work on the code, including patches or updates, must be well-documented. This attention to code quality is particularly crucial for open source software because a company that deploys a product is committing to fix whatever issues arise in its deployment. - -Another hallmark of industry papers is that the technology used as components of a system should be proven, simple, and well understood. *;login: Spring 2019* had an article titled [“Achieving Reliability with Boring Technology”](https://www.usenix.org/publications/login/spring2019/mangotfeatures) that described the desire of most developers for a solution proven to work. Risking your company’s fortune or your job on experimental technology is a good way to end up unemployed. What all of the above suggests is that industry has much less tolerance for novelty. For industry, version 2.0 of a popular, useful piece of software is much, much more attractive than any new piece of software. - -Related to its preference for proven technologies, industry research pays attention to a piece of software all the way through the maintenance stage of its lifecycle. How actively a piece of software is maintained is key to its value. This is often evaluated by looking at the userbase of a product. If it is being used in production by a substantial number of invested participants, then it is likely that bugs will be fixed and features will be regularly added without the company’s direct involvement. All companies that use the software will improve it, and all will collectively benefit. So a large, invested userbase is a telling indicator of what industry values, and many companies will not use code that does not have one. This is a clear sign that industry values proven reliability over novelty or elegance. - -Another hallmark of industry research is that its criterion for adoption is the ability to solve a real world problem substantially better than the existing state-of-the-art solution. There are often multiple ways to solve a problem, including non-technical procedures. Industry concerns will compare costs and risks to their current practices before considering adoption of something new, but it is rare to see an academic paper that quantitatively examines the legal, business, insurance, certification, and other differences involved in adopting a new technology. - -The factors mentioned above acknowledge that switching technologies always has costs and poses risks. Even if you give a tool away for free, getting people to understand how it works, how to set it up in their environment, etc. is a non-trivial undertaking. It can be extremely time-consuming, costly, and challenging to get parties to adopt even widely used, secure, professionally engineered software. Having legacy code makes it very hard to move adopters to a new technology, especially when their existing system “just works.” - -It should be obvious from the differences in priorities between industry and academic practitioners that connecting researchers on both sides of the divide will not be simple. In the next section, I present some don’ts and do’s for building bridges and encouraging effective technology transfer programs.
- -### **How not to support technology transfer** - -I have seen a few patterns that do not work well for technology transfer: - -+ **“Open sourcing it” by placing graduate student code on a website.** What company will spend engineering time going through academic websites and code to try to see if there is a useful prototype? Even if the code is cleaned up, without a meaningful user base or solidly engineered codebase, there is little chance of transfer. - -+ **Expecting industry to communicate their problems to academics.** From what I have seen, industry’s goals when working with academia are first and foremost to get high-quality interns they can hire to help address the talent shortage. The second goal is to explore “far out” areas and concepts to mitigate and understand long-term risk. In other words, should Company X examine using an IoT-hosted quantum homomorphic blockchain? Never mind that Company X’s business involves making shoes. If a manager there has read an article about how technology is changing, they do not want a startup to come out of nowhere and take their business (such as when streaming services eliminated the need for video stores like Blockbuster). - -+ **Taking existing academic work and transferring it.** For the reasons discussed above, a substantial part of the academic literature does not connect well with existing problems and scenarios in industry. No matter how well engineered the software is, if it does not solve a real world problem in a meaningful way, it will not get traction. It is uncommon for an academic team to go through the exercise to find “product-market fit.” Even if academics want to give away the technology for free, if it doesn’t solve a real world problem much better than existing technologies, the switching cost will outweigh the benefit. - - -### **How can we work toward practical impact / technology transfer?** - -+ **First and foremost,** we begin by working in practice with industry. This means actually using industry tools and techniques in a real world context. The goal here is to really understand the technical problems inherent in the industrial space in which we are working. We need to see the total landscape of stumbling blocks and workarounds that practitioners must navigate to understand the actual goals and constraints in the environment. Often it is possible to get this experience by collaborating closely with practitioners, such as by working together on aspects of a widely used open source project. However, the goal of such a collaboration should be to better understand the space, not to transition a specific technology or project into use. Contributing in an altruistic way is also an important way to build credibility within the broader community. - -+ **After you really understand a problem domain, look for opportunities to improve the system in a substantial way.** In my experience, dealing with issues that are minor pain points for a lot of adopters is perhaps the easiest way to go. (For example, I’ve worked a lot on the security of software update systems, which isn’t something any company competes on, but everyone desires to some degree.) These sorts of pain points are often invisible to individual users because they are such a small part of every effort. Yet, building an effective solution to such problems gives one a broad potential userbase. - -+ **Work altruistically with many potential adopters.** Individual companies and organizations have their own priorities, and those priorities have a way of shifting.
One of the first major groups to recognize the value of [TUF](https://theupdateframework.github.io/) was the Python community. In 2013 and 2014, we worked with the Python packaging community to create PEPs (Python Enhancement Proposals) for two different deployments of TUF. This work generated quite a lot of positive discussion in the Python community that spread to the broader tech community. In that timeframe, however, Python was in the process of moving to a new version of their repository software and so wanted to complete this process before integrating TUF. As a result, as of August 2019, the Python community still has not integrated TUF into production use, although it appears that [work](https://github.com/python/request-for/blob/master/2019-Q4-PyPI/RFI.md) may be starting now. Yet, the discussions and awareness generated by our work with Python led to several other communities integrating TUF, like Docker. After Docker’s integration, many other adoptions followed in the cloud space, including Microsoft Azure, Oracle, Cloudflare, Digital Ocean, RedHat, IBM, Datadog, and others. I believe it is much less likely this would have happened without the discussions in the broader community, which were followed by Docker’s security team. We worked with many different adopters and listened to their problems and concerns to help speed this transition along. - -+ **Expect the process to take a while.** Unfortunately, transitions into an existing code base often take many iterations with potential adopters before they are used. It took nine attempts before our patches, which produced a new signing model for git and fixed serious flaws, were adopted. This is surprising, given that they were done by a member of Arch Linux’s security team who has fixed serious flaws in many other pieces of software! In another example, a bug in the networking library for Python took several years to fix, despite the fact that a detailed description and a patch for the bug were provided. Software projects have their own timelines and goals, so you need to be engaged over a period of time. You are effectively asking them to understand, approve, and then agree to maintain the piece of code you provide them. So, be understanding when an overwhelmed maintainer isn’t jumping to add your code (and everyone else’s) into their codebase. - -+ **Engineer software right from the beginning.** We have used professional software engineers, followed code style guidelines, employed code reviews, and just had an overall focus on code quality and testing from our early days. This has led to multiple software projects that have been used for up to 10 years, as well as codebases that are reused in production in many domains. If you do not create software that is ready for production use, who will ever use it? Also, it has been my experience that cleanup is often more work than redoing the effort. Be wary of “I’ll test it later” or the even more worrying cry of “I’ll comment it later.” This is a recipe for buggy software that can impact the performance or security of a product, or just produce false results overall. We worked on several pieces of research code with bugs that would have substantially changed our results had we not caught the errors through testing. Of the five papers by other authors I have replicated, I found errors in four of them. Correct software engineering is paramount for the accuracy, reproducibility, and production use of software.
- -### **Conclusion** -If you would like to add your insights to this discussion, we encourage you to join our [Google group](https://groups.google.com/forum/#!forum/systems-research-in-practice/join). Conversations in this group could suggest tangible actions to move industry/academia interaction forward. diff --git a/_posts/2019-12-03-paper-walls.md b/_posts/2019-12-03-paper-walls.md deleted file mode 100644 index 18d0d1de..00000000 --- a/_posts/2019-12-03-paper-walls.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -layout: article -title: "Tearing Down the Paper Walls: Valuing Practical Problem-solving in Academia" -subnav: blog -comments: true -tagline: "While at KubeCon last week in San Diego, one " -author: 'Justin Cappos' -categories: - - 'Informational' ---- - -While at KubeCon last week in San Diego, one of the 12K attendees present came up to me and said, ''You're at NYU. What are you doing at KubeCon?'' While I was unsurprised by the question, I was surprised to learn that the attendee was a PhD student at a major US university. [Anonymity provided so as not to embarrass anyone.] The student was even more surprised that an academic like myself had created two CNCF projects and was active in the community. - -I met with the student's advisor a bit later and chatted about what brought them to the conference. The response was that the professor and student were here to learn about how the technology they were researching is used in practice. Their hope was that this might help them write better papers for a major academic conference. However, the professor realized that academia wasn't looking at the same problems. Industry solutions are complex as a necessary part of handling real world problems and thus are both too difficult to package into academic papers and harder still to improve on. So as I understand it, the professor and student will not be back to the conference because what they heard and saw at this meeting was not sufficiently useful in their endless quest to publish papers. - -### **So what to do?** - -I refuse to fall into this trap and would strongly advocate for reversing this trend. The real world is the real world, despite what some in academia may like to think. It is our place to enact change, have a positive impact, and help people, or else what use are we to society? - -I also refuse to propagate a belief in the value of papers over real world impact in the students I advise. It is certainly possible to do both. For example, my PhD student, Santiago Torres-Arias, has two first-author USENIX Security papers (see https://ssl.engineering.nyu.edu/papers/torres_toto_usenixsec-2016.pdf and https://ssl.engineering.nyu.edu/papers/torres-toto-usenix19.pdf), and papers at ASIACCS, CODASPY, and NSDI. More impressively, his [NSDI paper](https://ssl.engineering.nyu.edu/papers/kuppusamy_nsdi_16.pdf) was on improvements to [TUF](https://theupdateframework.github.io/), a secure update system that is used in the cloud by such prominent adopters as Amazon, Google, Microsoft, IBM, Docker, VMware, RedHat, Digital Ocean, DataDog, Cloudflare, and others. He is also the lead for the [in-toto](https://in-toto.io/) project, which is used by thousands of companies (Twitter, NBC, PBS, DataDog, Activision, Conde Nast, etc.). Our work on [git attacks](https://ssl.engineering.nyu.edu/papers/torres_toto_usenixsec-2016.pdf) caused the resulting defense to be deployed by git in 2016, effectively protecting every piece of software that uses it.
- -We undoubtedly could have had more publications in the same timeframe had we opted to ignore practical impact. We could have had even more papers by simply worrying about the ''least publishable unit.'' That is, producing the greatest number of papers for the least effort, often by creating derivative work. What is odd is that I have often heard academics decry the ''least publishable unit,'' while actively avoiding the hard work needed to have practical impact. I fail to see a strong distinction between those who do not work toward practical impact and those who work toward the ''least publishable unit.'' In both cases, it is just writing pieces of paper whose only likely audience is other academics. - -When evaluating tenure cases, I will take the practical impact of the candidate strongly into account. There are three main dimensions I would like to see in evaluating practical impact: - -1. *An externally measurable contribution.* This could be something like an open source project where one can observe that source code has been merged. I have reviewed several proposals where the faculty member claimed such impact, but I was unable to find evidence to support that claim. A reference to a commit or a link to a blog post the company made about integration provides a direct means for verification. - -2. *Substantial positive impact.* Any change integrated into a piece of software should have a substantial positive impact. Improving important software in significant ways is a key dimension that should be valued. - -3. *Research-relevant addition.* The difference in the code should be in an important direction (such as a security, usability, or efficiency improvement) and involve a substantial amount of creativity and thought. Designing and deploying the first system that protects against an emerging threat is a researcher trailblazing the way. Adding a feature to a piece of software that its competitors already have, not so much. - -I ask other faculty to consider using these criteria when evaluating practical impact. A few faculty giving practical impact the appropriate emphasis can have a big impact on the next generation of researchers. diff --git a/_posts/2020-02-03-transparent-logs.md b/_posts/2020-02-03-transparent-logs.md deleted file mode 100644 index 8026a014..00000000 --- a/_posts/2020-02-03-transparent-logs.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -layout: article -title: "Contrasting Transparent Logs and The Update Framework" -subnav: blog -comments: true -tagline: "When and where would you use one over the other?" -author: 'Trishank Karthik Kuppusamy and Marina Moore' -categories: - - 'Informational' ---- - -## TLDR - -Both Transparent Logs and The Update Framework were designed to protect end-users from a compromise of package repositories, but ultimately reflect different assumptions about how security should be managed. Transparent Logs are better at providing an immutable history of packages, which lends itself to third-party auditing. The Update Framework is better at providing a higher degree of compromise resilience, as well as built-in procedures for recovering from a compromise. One can obtain the best of both worlds by combining both systems.
- -## Introduction - -In recent conversations about the effectiveness of [The Update Framework (TUF)](https://theupdateframework.io/) à la [Docker Content Trust](https://docs.docker.com/engine/security/trust/content_trust/) and [Uptane](https://uptane.github.io/) as a method to prevent package tampering, the idea of [Transparent Logs (TLs)](https://research.swtch.com/tlog) à la [Certificate Transparency](https://www.certificate-transparency.org/) and the [Go sumdb](https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md) often came up. It became clear to us that there is some genuine confusion about what these two technologies have to offer, and which might be better, given that different attacks often require different types of defenses. In response, we have compiled the strengths and weaknesses of each technology to help repository managers select the best strategy for securing the packages they host. - -Both TLs and TUF can be used to protect packages from being tampered with by man-in-the-middle (MitM) attackers. Both systems were designed around the principle that package repositories are not naively trusted by package managers. Both systems provide signed hashes of packages, so that package managers can verify the integrity and, to some degree, the authenticity of downloaded packages. In contrasting these solutions, the primary difference appears to be that TLs seek to provide immutable history via append-only logs, as well as third-party auditing of these logs, whereas TUF concentrates on providing a higher degree of compromise resilience, as well as recovery from compromise when, and not if, it happens [^1]. - -In plain terms, if the main goal is to have a public list of packages that different auditors check, TLs provide this functionality. If one wants to be able to limit the damage from a repository or key compromise and securely recover from it, TUF provides this functionality. - -## Threat model - -To better understand the differing approaches of TLs and TUF, we first need to characterize the nature of the threat both are designed to address. Figure 1 depicts the relationships between the different parties and the two systems. - - - -**Figure 1**: A rough illustration of the various relationships between package developers, package repositories, TUF metadata repositories, Transparent Logs, attackers, and end-users. - -We define a compromise as a situation where attackers can control all of the following: - -1. The network connection between end-users and Transparent Logs, The Update Framework metadata repository, and / or package repositories. They may do this with MitM attacks, by exploiting weaknesses in cryptographic keys or libraries, and / or by breaking into the endpoints themselves. -1. One or more online signing keys accessible by automation on the endpoints above, but not offline keys kept off those endpoints on, say, [Hardware Security Modules](https://en.wikipedia.org/wiki/Hardware_security_module) or hardware tokens with GPG support. - -## Immutable history and third-party auditing vs. compromise resilience and recovery - -TLs provide some compromise resilience in that they prevent old logs containing old versions of packages from being deleted, even if the log itself has been taken over. This is in addition to preventing MitM attacks on end-users. Furthermore, third-party auditing allows for detection of new, malicious versions of packages [^2]. And, it does all of this without requiring package developers to sign anything.
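To make the "immutable history" idea more concrete, here is a deliberately simplified sketch in Python. It models the log as a plain hash chain rather than the Merkle tree that production TLs (e.g., Certificate Transparency or the Go sumdb) actually use, and every name in it is illustrative rather than taken from any real implementation:

```python
import hashlib
import json


def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the hash of everything logged before it."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()


class AppendOnlyLog:
    """Toy append-only log: the head hash commits to the entire history."""

    def __init__(self):
        self.entries = []      # list of (record, head hash after appending it)
        self.head = "0" * 64   # hash representing the empty log

    def append(self, record: dict) -> str:
        self.head = entry_hash(self.head, record)
        self.entries.append((record, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the chain; rewriting any old entry changes every later hash."""
        running = "0" * 64
        for record, recorded_head in self.entries:
            running = entry_hash(running, record)
            if running != recorded_head:
                return False
        return running == self.head


log = AppendOnlyLog()
log.append({"package": "example-pkg", "version": "1.0.0", "sha256": "aaaa"})
log.append({"package": "example-pkg", "version": "1.0.1", "sha256": "bbbb"})
assert log.verify()

# Rewriting history is detectable by anyone who remembers an earlier head hash.
log.entries[0] = ({"package": "example-pkg", "version": "evil", "sha256": "cccc"},
                  log.entries[0][1])
assert not log.verify()
```

Because each entry commits to everything logged before it, an auditor who remembers an earlier head can detect any later attempt to rewrite or silently drop history; real TLs add Merkle proofs so this check remains efficient even for very large logs.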
However, although third-party auditing can help to detect these new, malicious versions of packages, it does not prevent them from being added in the first place [^3]. In addition, third-party auditing requires an interested third party to provide resources toward detecting malicious packages. This auditing model does not scale down to small projects with fewer users or resources. - -TUF can be configured (à la [Diplomat](https://www.usenix.org/node/194973), [PEP 458](https://www.python.org/dev/peps/pep-0458/), and [PEP 480](https://www.python.org/dev/peps/pep-0480/)) to provide a higher degree of compromise resilience in that attackers cannot tamper with all packages without being detected. To do so, though, some developers must be willing to sign their packages independently of the repository. Furthermore, it provides explicit, out-of-the-box procedures to recover from a compromise when, and not if, it happens. However, TUF relies on checks performed on metadata from the repository. Without an immutable history and third-party audits, a compromise of multiple keys means that attackers can replace metadata and packages on the repository in a way that is not immediately detectable by package managers (e.g., by dropping packages). This is more problematic in cases where the repository automatically signs metadata for package developers. - -In a nutshell, the security provided by TLs is heavily dependent on the ability of large organizations, such as Google, to prevent a repository compromise. Since TLs do not prevent new, malicious versions of packages from being added in this scenario, and do not immediately provide ways to recover from such attacks, prevention of these attacks becomes much more important. - -In contrast, TUF can be configured so that malicious versions of packages either cannot be added to the repository, or, if added, will not be trusted by package managers. This simple but significant difference means that security can easily be maintained by independent or nonprofit package repository maintainers with significantly fewer resources than those of large tech organizations, such as Google. In exchange for this simplicity, the security of TUF relies on repository administrators, as well as package developers who choose to opt into it, maintaining the security of offline keys. - -Therefore, situations where one strategy might be preferable to another may be dictated by which of these purposes is most important. We also propose that, because the two technologies are complementary, adopting both could offer enhanced security against a larger variety of attacks than using just one. - -## Getting the best of both worlds - - - -**Figure 2**: A rough illustration of the various possible relationships between a package repository P, a TUF metadata repository T, a Transparent Log L, a mirror M, and auditors. - -TLs and TUF both aim to protect users from malicious packages. TLs provide immutable history, as well as built-in third-party auditing, whereas TUF provides a high degree of compromise resilience, as well as built-in procedures for recovery. A package repository using both systems could gain all of these benefits. - -More precisely, consider a package repository P, a TUF metadata repository T with [consistent snapshots](https://github.com/theupdateframework/specification/blob/master/tuf-spec.md#7-consistent-snapshots), a transparent log L, and a mirror M, as illustrated in Figure 2. We assume that T, L, and M are independent.
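The publication and audit flow described in the next paragraphs can be sketched roughly as follows. This is only a toy illustration under our own simplifying assumptions: all function and variable names are hypothetical, the timestamp metadata is an unsigned dictionary rather than real, signed TUF metadata, and packages are plain strings.

```python
import hashlib
import json


def digest(obj) -> str:
    """Stable SHA-256 digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()


# Hypothetical stand-ins for the parties in Figure 2.
package_repo = {}     # P: package name -> contents
tuf_metadata = []     # T: successive versions of (toy, unsigned) timestamp metadata
transparent_log = []  # L: append-only list of hash(N) for each timestamp version N
mirror = []           # M: archive of every version of the metadata


def publish(name: str, contents: str) -> None:
    """P publishes a package; T produces timestamp version N; L appends hash(N); M archives N."""
    package_repo[name] = contents
    version = len(tuf_metadata) + 1
    timestamp_n = {"version": version,
                   "packages": {n: digest(c) for n, c in package_repo.items()}}
    tuf_metadata.append(timestamp_n)
    transparent_log.append(digest(timestamp_n))
    mirror.append(timestamp_n)


def audit(version: int) -> bool:
    """An auditor compares hash(N) from L against the metadata archived by M."""
    return transparent_log[version - 1] == digest(mirror[version - 1])


publish("example-pkg", "release 1.0.0")
publish("example-pkg", "release 1.0.1")
assert all(audit(v) for v in (1, 2))
```

In a real deployment, T would sign N with its timestamp key, L would offer Merkle inclusion and consistency proofs rather than a bare list of hashes, and M would also archive the packages themselves.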
Whenever P publishes a new package, metadata must be updated accordingly on T. This ultimately means producing a new version N of the timestamp metadata that T must submit to L and M. L must append hash(N), whereas M must download all new metadata that follow from N. P and T are free to garbage collect obsolete metadata and packages, whereas L must archive all timestamp metadata produced by T, and M must archive all packages and metadata ever published by P and T, respectively. - -The most important property of this system is that auditors now have a tamper-proof record of who published what and when. Auditors can query L for the hash(N) of every version N of the timestamp metadata they are interested in, query M for all TUF metadata that follows from N, and audit the metadata for particular packages. - -As a desirable side effect, auditors can now also detect [forking attacks](https://www.usenix.org/legacy/event/osdi04/tech/full_papers/li_j/li_j.pdf), where T may have shown different timestamps to different end-users. They may do so using the same method outlined in the previous paragraph (assuming that [hashes](https://www.usenix.org/conference/atc17/technical-sessions/presentation/kuppusamy) are recorded in the snapshot metadata produced by T). - -## Comparison matrix - -The table below offers a side-by-side comparison of the security features of the two systems. - - - -In particular, the Datadog TUF and [in-toto](https://blog.acolyer.org/2019/10/02/in-toto/) integration mentioned in the last column is discussed in more detail [here](https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto/). As far as we know, it is the first compromise-resilient packaging system that detects attacks _anywhere_ between developers and end-users. - -## Conclusion - -TLs and TUF both help secure package repositories, but their priorities and goals differ, and so they offer complementary virtues. When used separately, TLs provide an immutable history of a repository with third-party auditing, whereas TUF provides better compromise resilience and procedures for recovering from a compromise. Used together, these technologies can provide all of the aforementioned properties. If anyone would like to implement a combination of TLs and TUF, or discuss the differences between these systems, please reach out to us on [the TUF mailing list](https://groups.google.com/forum/?fromgroups#!forum/theupdateframework). - -## Acknowledgements - -We would like to thank Justin Cappos, Nick Coghlan, Lois Anne DeLong, Ernest W. Durbin III, Sumana Harihareswara, Joshua Lock, Santiago Torres-Arias, Filippo Valsorda, and the Python community for their feedback. - -## Changelog - -1. **2020-02-05**: Added a footnote about the shared design goal of [removing trust](https://github.com/secure-systems-lab/ssl-site/issues/96). -1. **2020-02-06**: Added a footnote that covers the procedure for [recovering](https://github.com/secure-systems-lab/ssl-site/issues/95) from a key compromise in TLs. - -## Footnotes - -[^1]: Since this article was published, we have [learned](https://github.com/secure-systems-lab/ssl-site/issues/95) that the procedure for recovering from a key compromise for TLs is, to _some_ extent, comparable to the PEP 458 security model for TUF. It is best illustrated using the following scenario. Suppose that before time K, online key X was used. At time K, this online key X is compromised. After time K, we switch trust to online key Y instead.
Using TLs (e.g., Go sumdb), the log will be signed using both online keys X and Y. This is so that [Go binary distributions](https://golang.org/doc/install) released before time K, which effectively have X baked in, can continue to trust the TL. Subsequent Go binary distributions will be baked with Y instead. Presumably, X will be considered deprecated / compromised, and some grace period will be allowed before X is completely revoked (i.e., no longer used to sign the TL). Using TUF (PEP 458), there is no need to issue a new software update to permanently replace trust in X with Y. The TUF metadata repository administrators would use the offline keys for the `targets` role (not even necessarily the higher-level `root` role) to do so. Both old and new versions of the package manager can then permanently switch trust from X to Y, despite [backwards-incompatible](https://github.com/theupdateframework/taps/pull/107) changes to TUF metadata. There is no need to deprecate X and offer a grace period because it is no longer used. We feel that this is a subtle but important difference. - -[^2]: A major design goal for Google was to make sure that the community would not have to trust them blindly, and thus these mechanisms are a means to an end, which is [removing trust](https://github.com/secure-systems-lab/ssl-site/issues/96). Both TLs and TUF share this design goal of removing as much trust as possible from the package / metadata repository or log. - -[^3]: In fact, assuming that the TL and package repository are independent, this does not even require attackers to compromise the TL. Since the TL would [automatically](https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md#checksum-database) fetch missing versions of packages, all attackers would have to do is add malicious versions of packages to the repository (such as the GitHub repository belonging to the package developers), and somehow convince developers to refer to these malicious versions (say, by publishing new tags on GitHub). In this sense, TLs still depend on [Trust-On-First-Use (TOFU)](https://go.googlesource.com/proposal/+/master/design/25530-sumdb.md#module-authentication-with). While Go encourages pinning packages using [Semantic Versioning](https://semver.org/), which ameliorates the issue to some extent, the fact remains that malicious versions of packages can still be added automatically, which is especially problematic when package managers such as [`pip`](https://pypi.org/project/pip/) automatically try to find the latest versions of packages. diff --git a/_posts/2020-07-03-grad-phds.md b/_posts/2020-07-03-grad-phds.md deleted file mode 100644 index 187dce07..00000000 --- a/_posts/2020-07-03-grad-phds.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -layout: article -title: "Good-bye, Santiago and Dan: Some Parting Wisdom from our Most Recent Ph.D. Graduates" -subnav: blog -comments: true -tagline: "This past May, two Ph.D. candidates from the Secure Systems Lab, Santiago Torres-Arias and Daniel Gopstein, successfully defended their dissertations..." -author: 'Lois Anne DeLong' -categories: - - 'Informational' --- -This past May, two Ph.D. candidates from the [Secure Systems Lab](https://ssl.engineering.nyu.edu/), Santiago Torres-Arias and Daniel Gopstein, successfully defended their dissertations and were awarded their Ph.D.s. While we are excited about this milestone in their lives, there is no doubt they will be missed.
Not only did each make significant research contributions to SSL projects during their stay, but both could also be relied on as resources for others in the lab. Santiago was always quick to chime in when others mentioned particular research challenges in their progress reports. Dan not only set up, and for many years maintained, our SSL website, but also offered assistance more than once to those interpreting and analyzing data from human subject user studies. *(On a strictly personal note, both of them also rescued me from some self-inflicted LaTeX wounds in my early days of working on GitHub, for which I am eternally grateful.)* - -Before leaving NYU Tandon, both Santiago and Dan shared some thoughts on their time at the lab. In separate interviews they talked about what brought them to the lab, some high and low points of the experience, and what lessons were learned along the way. - -#### Getting to SSL - - - -Santiago had initially come to NYU to work on a master’s degree after completing a B.S. in Electrical and Telecommunications Engineering at Universidad Iberoamericana, Mexico. He went on to finish a Master’s in Cybersecurity at Tandon in 2015. His decision to stay on for a Ph.D. was largely driven by the “problem-solving orientation” of SSL, which allowed him to be “a project manager, plus a Ph.D. candidate, plus an open source expert” in one little corner of a much larger ecosystem. Being able to embrace all these roles was more appealing to him than “conforming to industry trends--making things just for the sake of making things.” His initial work with his advisor, Tandon associate professor [Justin Cappos](https://engineering.nyu.edu/faculty/justin-cappos), who directs SSL, focused on a strategy called [PolyPasswordHasher (PPH)](https://pph.io/PolyPasswordHasher/) that forces potential hackers to crack passwords in sets. This increases the attackers’ level of difficulty, making a PPH-enabled database very hard to breach, even for an adversary with millions of computers. Santiago later headed up development and integration of [in-toto](https://in-toto.io/), a framework to secure the software development supply chain by providing greater transparency and accountability. This framework is in various stages of adoption by a number of open source projects, including a recent [integration by DataDog](https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto/) that also uses [The Update Framework (TUF)](https://theupdateframework.io/), another SSL project on which Santiago worked. He has presented his research at several top-level conferences, including [USENIX Security](https://ssl.engineering.nyu.edu/papers/torres-toto-usenix19.pdf) and the [USENIX Symposium on Networked Systems Design and Implementation](https://ssl.engineering.nyu.edu/papers/kuppusamy_nsdi_16.pdf). - - - -Dan, on the other hand, initially entered the doctoral program at Tandon in 2014 to work with Assistant Professor Andy Nealen in the NYU Tandon [Game Innovation Lab](https://game.engineering.nyu.edu/), even though he admits game design was not a primary interest. He had completed a bachelor’s degree at Rutgers University in 2010 and spent four years in industry after that.
Dan began working with Cappos at SSL because he wanted to work in software engineering, and Justin “was looking for a student and had funding.” The work he ended up doing with the [Atoms of Confusion](https://atomsofconfusion.com/) project has opened up a new way to think about program comprehension and has several potential future applications in education and language design. Two of the four papers Dan submitted to conferences on this topic have garnered ACM SIGSOFT Distinguished Paper honors, one from the [2017 Foundations of Software Engineering conference](https://atomsofconfusion.com/papers/understanding-misunderstandings-fse-2017.pdf) and the other from the [2018 International Conference on Mining Software Repositories](https://atomsofconfusion.com/papers/atom-finder-msr-2018.pdf). Dan also continued his affiliation with the Game Innovation Lab by contributing to more than a dozen conference papers with students and faculty from that group. - -#### Surprises, Highlights and Lowlights - -The biggest surprise for Dan during his tenure at Tandon was perhaps not a pleasant one. “I had this image that I would come out of this with a lot of useful skills,” he noted, yet he added, “all the jobs open to me now are ones I could have gotten with a bachelor’s degree... I had just assumed that there were Ph.D. graduates out there doing research in industry, but it seems that only the top 1% of the top 1%, the ‘knock it out of the park’ performers, are getting those jobs.” - -Santiago also experienced something of a rude awakening, though ultimately one with more of a positive outcome. “I came in thinking I was expected to know everything right away, but I found they don’t just hand you the keys of the kingdom,” he observed. “There is a degree of ambition that comes into play, an acknowledgement to prove yourself, but there is no need to show off all you know at once.” He also cautions that working towards a Ph.D. “can be very emotionally challenging,” and so he learned that this side of life could not be ignored. For him, “working in this place where there is a community to reach out to” was a saving grace when things got tough. - -Both could also point to some “grace notes” from their tenure. Santiago observed that, though one of his earliest paper reviews for the PPH paper was pretty cruel, over time he found that “not all criticism is negative. I found that the community will validate as much as criticize.” He also rediscovered that he liked writing, which has its advantages when one decides to pursue a career in academia. And, the fact that his research area was “uncharted territory” meant that “in-toto evolved as I evolved as a researcher.” The entire process was as much a learning experience as it was finding a hands-on solution to a problem. - -The life of a Ph.D. student also offers a certain amount of flexibility, and that can be a benefit in and of itself. “I managed to have a really good life over the last few years, and it was partly enabled by the Ph.D.,” Dan stated. “My grandfather recently died. The Ph.D. program enabled me to spend time with him, something I otherwise might not have been able to do.” - -#### Words of Wisdom to Those Who Remain - -Dan recommends “coming into the program with a specific research project in mind, preferably something you would work on even if it were on your own time.” Though he admits that “I got pretty lucky with the assignment I received,” he still has some regrets that he was not able to pick his own research topic.
He also advises that “dropping out is a viable option. I know tons of people who are happy they dropped out.” In short, if you are not happy, understand you can try a different path. - -Santiago, who accepted a tenure-track position in Purdue University’s Electrical and Computer Engineering Department, is looking forward to “trying to reconcile the world I want to see with the skills I have.” His advice to his lab mates is rather simple. “Enjoy it more.” - -Professor Cappos added, “Both Santiago and Dan have made tremendous contributions while part of SSL. We are all excited to see what great things they each achieve going forward! You will always have a home here.” diff --git a/_posts/2020-11-13-christian-gsoc.md b/_posts/2020-11-13-christian-gsoc.md deleted file mode 100644 index 177ae8b5..00000000 --- a/_posts/2020-11-13-christian-gsoc.md +++ /dev/null @@ -1,34 +0,0 @@ ---- -layout: article -title: Looking Back on a Summer of Code -subnav: blog -comments: true -tagline: "I have been active on GitHub since 2013 and although I have participated in various projects, such as serving on the security team for Arch Linux, I have never really contributed a large amount of code..." -author: 'Christian Rebischke' -categories: - - 'in-toto' - ---- - -*Note: For 15 years, the Google Summer of Code program has given student developers an opportunity to spend their summer break working with an open source organization on a three-month programming project. This year, Christian Rebischke, a 27-year-old Master’s student from Germany (Technical University of Clausthal), spent his Google Summer working on in-toto. Christian came to the in-toto project through his affiliation with the Cloud Native Computing Foundation (CNCF). In this post, Christian shares a little of his experience with the Summer of Code program.* - -I have been active on GitHub since 2013 and although I have participated in various projects, such as serving on the security team for Arch Linux, I have never really contributed a large amount of code to these initiatives. Most of my contributions were smaller fixes or documentation enhancements. As a result, when I first saw the announcements for [Google Summer of Code](https://summerofcode.withgoogle.com/), I hesitated for a long time. However, a few people I met in the [Internet Relay Chat (IRC)](https://en.wikipedia.org/wiki/Internet_Relay_Chat) encouraged me to try it. I was completing my master’s program and so, realizing this might be my last chance, I decided to go all-in and applied to three projects: Prometheus, Flux and [in-toto](https://in-toto.io). I focused on projects inside the CNCF because my dream job would be in the realm of Site Reliability Engineering. I knew little about in-toto except that it is related to the [Reproducible Builds](https://wiki.archlinux.org/index.php/Reproducible_Builds) efforts of Linux distributions, because Arch Linux is going through the same process. But I never really had a close look at the project, so it was something of a surprise when in-toto was the project that chose me. - -As it turns out, the choice was an appropriate one for me. I have served as a package maintainer for several years now, and in that capacity, supply chain security has long been a major concern. As a package maintainer I try to ensure that all users can be certain they are using the package the project owners originally conceived. This only works with a secure supply chain, and providing that seems to be a big problem for many developers.
Otherwise, why do so many projects lack standards, like signed tarballs? And even when a signed tarball exists, so many other factors go into making a supply chain secure that one secure step doesn’t mean that the final product is what the owner originally envisioned. A secure supply chain begins with the first letter of code and ends with the deployment of the product on a target system. Managing all of these steps is indeed difficult and I can fully understand why developers have so many problems with it, especially if you have to deal with various artifacts at each step of your supply chain, like the input and output of compilation or code verification. - -I started my Summer of Code journey in the CNCF in-toto Slack channel in May 2020. The first task I worked on was getting to know the specification and the main objective of my upcoming internship. I learned that in-toto was born as a research project in the [Secure Systems Lab](https://ssl.engineering.nyu.edu/) of New York University, under the guidance of [Professor Justin Cappos](https://engineering.nyu.edu/faculty/justin-cappos), and that it focuses on securing the supply chain. The motivating idea is that every step in a software supply chain should be verifiable, starting with signed commits, through build servers and continuous integration or continuous deployment, to the end user. in-toto achieves this by defining links, with each step responsible for generating in-toto link data. These links are files in JSON format that represent a step in a software supply chain, and they testify to which person or machine completed that step. Each of these steps can be signed and later verified. My task has been to port this functionality from its Python implementation to the Go implementation. - -My contributions began with a few small pull requests to the Go implementation that cleaned up issues found by go-lint. I also took a look at existing pull requests that had stalled somewhere along the line. In June 2020, having already gotten more confident with the code base, I started with bigger adjustments. My primary tasks included signing generated link data via signature algorithms, as specified in the in-toto specification, and providing full support for RSA-PSS, Ed25519 and ECDSA. - -Throughout this work I was mentored by four in-toto project members: [Lukas Pühringer](https://ssl.engineering.nyu.edu/people#lukas_puhringer), [Trishank Karthik Kuppusamy](https://engineering.nyu.edu/alumni/trishank-kuppusamy), [Santiago Torres-Arias](https://www.cerias.purdue.edu/site/people/faculty/view/3153), and Professor Cappos. For communication we used the CNCF in-toto Slack channel and GitHub’s comment functionality in issues and pull requests. This worked pretty well for us, even though some of my mentors lived in other time zones. The time zone differences, though, actually became a benefit. Lukas was also in Europe, so I could talk with him earlier in the day, and later in the day I could ask my mentors in the US for direct feedback. This distributed the work equally and no mentor got too distracted by my questions. We had some great times hacking together on the pull requests. The feedback I received was always on point and I really enjoyed working with the community. I think that, in all my time contributing to open source projects, I have never experienced such a professional and friendly community. - -There were other positive takeaways from this experience.
It has long been a dream of mine to contribute more than just a few lines of code to a project. The Google Summer of Code was my first successful attempt to dive deep into an unfamiliar code base, and I feel I have definitely increased my skills in reading unfamiliar code, getting familiar quickly with a code base, and communicating with project developers. I also found and submitted a fix for a bug in the Go crypto library that led to a nil pointer dereference and invalid memory access, and I attended my first [KubeCon](https://www.cncf.io/events/kubecon-cloudnativecon-europe-2020/). Though sadly the conference was online-only due to the COVID-19 outbreak, I still enjoyed every minute of it. I even found some other interesting projects to work on in the future, like [TUF (The Update Framework)](https://theupdateframework.io/). I have already submitted my first pull request to the TUF Go implementation and I plan to keep working on the in-toto Go implementation. - -In summary, I think Google Summer of Code brought me to a project that I think is important, interesting and challenging all at the same time. Moreover, I really think that this project brings me one step closer to my future career goal. The Go implementation and the proximity to the CNCF will definitely help me in increasing my Site Reliability Engineering skills. The in-toto Go implementation has been the project I was always looking for, one that unites my personal highlights of open source: an awesome, friendly and helpful community, a doable challenge that helps me to grow, and a project with a higher purpose. - -I do not want to finish without honouring my four mentors: Lukas Pühringer, Justin Cappos, Santiago Torres-Arias and Trishank Karthik Kuppusamy. They always reacted quickly when needed and they always gave me the right hints when I had difficulties understanding the specification or the code base. -I could not be happier. - -*Editor’s note: When asked to comment on Christian’s work, Torres-Arias observed, “I can’t think of a more successful Google Summer of Code experience than Christian’s in the in-toto project. I believe this not only because of the raw contributions to the in-toto golang codebase, but also because it is a perfect example of the values that GSoC is working to foster: allowing a young open source enthusiast to integrate themselves with an open source project. As always, it is very rewarding to see a contributor take on small tasks, meet people, grow initiative and eventually become an esteemed member of the community for an indefinite amount of time.”* - - diff --git a/_posts/2021-01-25-intoto-release.md b/_posts/2021-01-25-intoto-release.md deleted file mode 100644 index 7d6d56a5..00000000 --- a/_posts/2021-01-25-intoto-release.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: article -title: In the Shadow of SolarWinds, in-toto Releases its First Major Version -subnav: blog -comments: true -tagline: "The recent SolarWinds hack ... is a sobering reminder that though updates are necessary, they are also always fraught with risk" -author: 'Lois Anne DeLong' -categories: - - 'in-toto' - ---- - -The recent [SolarWinds](https://en.wikipedia.org/wiki/SolarWinds) hack, in which companies, government agencies, and academic institutions suffered significant data breaches after malware was slipped into a software update, is a sobering reminder that though updates are necessary, they are also always fraught with risk.
The [full impact](https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach) of the attack, which is known to have affected computer systems within the U.S. Departments of Defense, State, Homeland Security, Treasury, Commerce, and Energy, is still to be tallied. - -Though attacks on software update systems are not a new phenomenon, the introduction of what has been dubbed the [Sunburst virus](https://blog.malwarebytes.com/detections/backdoor-sunburst/) has demonstrated just how devastating the consequences can be if updates are corrupted with malware. Defending against future hacks of this nature requires a security system that can assure that all the steps performed on a piece of software throughout its design and development lifecycle were completed in the right way by the right people. - -As the fallout from SolarWinds was just coming to light, [in-toto](https://in-toto.io/), a free, easy-to-use framework that cryptographically ensures the integrity of the software supply chain, marked a significant milestone in its project history. On November 19, 2020, following five years of research and development, and adoption or integration into several major open source software projects, in-toto released its first major version (V.1.0.0). The release signifies that in-toto has reached a level of maturity where its developers can ensure its quality and guarantee its security to potential adopters. - -Initiated in 2016 by [Prof. Santiago Torres-Arias](https://www.cerias.purdue.edu/site/people/faculty/view/3153) and [Prof. Justin Cappos](https://engineering.nyu.edu/faculty/justin-cappos) in the [Secure Systems Laboratory](https://ssl.engineering.nyu.edu/) at [NYU’s Tandon School of Engineering](https://engineering.nyu.edu/), in-toto provides transparency as to what steps are performed on a piece of software throughout its design and development lifecycle. This information is crucial to security as it addresses an inherent problem in software development processes: their decentralized nature. “As it moves from development to testing to packaging, and finally to distribution, a piece of software passes through a number of hands,” notes Torres-Arias, who leads the in-toto project and did his dissertation on the topic. He adds, “By requiring that each step in this chain conform to the layout specified by the developer, it confirms to the end-user that the product has not been altered for malicious purposes, such as by adding backdoors in the source code.” - -On a simple level, in-toto can be explained as follows: A project owner creates a layout describing the steps that every functionary—be it an individual or an automated entity—must perform, as well as the specific inspection steps that must be performed on the client's machine. After a step is completed, the functionary records link metadata about the action specifying what was done, when, and by whom. Once all functionaries have completed their tasks, both the metadata and the files are aggregated into a final product. Lastly, when the end-user receives the product, he or she will perform a final verification to ensure all steps were performed correctly. -For [Dr. Trishank Kuppusamy](https://www.linkedin.com/in/trishank-karthik-kuppusamy/), a 2017 Ph.D.
graduate of NYU who worked on the project in its early days, and is now Staff Security Engineer at one of the project’s adopters, [Datadog](https://www.datadoghq.com/), what separates in-toto from other security systems is that “it has been designed against a very strong threat model that includes nation-state attackers at the top. Together with its sibling project [The Update Framework (TUF)](https://theupdateframework.io/), it is the only system that I know of that offers end-to-end security anywhere between developers and end-users.” He adds, “At Datadog, we chose to use TUF and in-toto to automatically yet securely [deliver](https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto/) new versions of our Agent Integrations. As far as we know, this is the first publicly-discussed CI/CD pipeline in the industry that provides such end-to-end security.” As noted by BoxBoat on December 14, “DataDog’s implementation of in-toto in their pipelines would likely have stopped the SolarWinds attack dead in its tracks.” -in-toto has collaborated with open source communities such as Git, Docker, and openSUSE. It is also part of the [Cloud Native Application Bundle](https://cnab.io/) (CNAB), an open source project that facilitates the bundling, installing, and managing of container-native applications. Ralph Squillace, Principal Program Manager for Microsoft Azure Compute’s Application Platform team and a contributor to CNAB, noted that in-toto was picked for the specification’s supply chain attestation approach in [v1.0](https://github.com/cnabio/cnab-spec/blob/master/300-CNAB-security.md) “precisely because it was open-source and applied precisely to the problems of supply chain confidence the community expects distributed applications to have in the real world.” He adds, “There are many possible ways of handling the problem, but in-toto can be used anywhere and is developed in public by an engaged community. We hope to expand its usage and support it in our work going forward.” - -In addition to Prof. Torres-Arias, who graduated from Tandon in 2020 and is now an assistant professor of electrical and computer engineering at Purdue University, the in-toto research team includes developer Lukas Pühringer, Ph.D. student Aditya Sirish, and undergraduate students Yuanrui Chen, Isha Vipul Dave, Kristel Fung, Cindy Kim, and Benjamin Wu, all from the Secure Systems Laboratory at NYU; and doctoral students Hammad Afzali Nanize and Sangat Vaidya, together with [Professor Reza Curtmola](https://web.njit.edu/~crix/), who is co-director of the [Cybersecurity Research Center](http://centers.njit.edu/cybersecurity/) at New Jersey Institute of Technology. in-toto has also benefited from reviews and contributions from members of the open source community, who have not only provided critiques on design decisions, but who have also shared lessons learned from their own deployments of the framework. - -With the release of 1.0.0, both the research team and this growing user community look forward to the framework’s ability to reduce malicious interference in the software lifecycle. “The release of a stable in-toto 1.0.0 will hopefully encourage more software projects to start securing their supply chains yesterday rather than tomorrow,” Kuppusamy notes.
“It is an important milestone because both the specification and the reference implementation have been tested in production for at least the past three years.” - diff --git a/_posts/2021-01-27-uptane-standard.md b/_posts/2021-01-27-uptane-standard.md deleted file mode 100644 index d9da3467..00000000 --- a/_posts/2021-01-27-uptane-standard.md +++ /dev/null @@ -1,67 +0,0 @@ ---- - -layout: article -title: "Uptane Releases V.1.1.0 of its Standard; Introduces Deployment Best Practices" -subnav: blog -comments: true -tagline: 'There is little doubt that cars have caught the attention of hackers, and little hope that these trends will be reversed.' -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- - -According to the [2020 Global Automotive Cybersecurity Report](https://upstream.auto/upstream-security-global-automotive-cybersecurity-report-2020/), -released by Upstream Security in December 2020, cyber attacks on vehicles increased 99% from 2018 to 2019, and by 700% since 2016. There is little doubt -that cars have caught the attention of hackers, and little hope that these trends will be reversed. Therefore, there is a growing need for clear and precise security protections, which should ideally be based on lessons learned in practice in the automotive industry. - -In 2016, [Uptane](https://uptane.github.io/), an open-source software security project designed with direct input from automotive manufacturers and -suppliers, was introduced to address this threat. In a nutshell, Uptane implementations secure automotive systems by establishing a set of checks and balances on a vehicle’s electronic control units (ECUs) to ensure the authenticity of incoming software updates. -It is designed for “compromise-resilience,” or to limit the impact of a compromised software repository, an automotive insider attack, a leaked signing key, and similar attacks. Uptane principles can be incorporated into most existing software update packages, but offer particular support for over-the-air (OTA) software distribution strategies. -Over the past four years, as the framework has been developed, tested and implemented, hacks against vehicles have gone from isolated incidents by -individuals to carefully coordinated assaults by governments and large-scale criminal enterprises. “Nation-state actors have increasingly attacked -software delivery mechanisms and the software supply chain,” states Dr. Justin Cappos, who is a founder of the Uptane project. “Without strong protections -like Uptane, these attacks against the automotive industry will be successful, leading to massive damage and loss of life.” - -Nevertheless, technology is useless or even dangerous if not implemented properly. Shortly after its official launch, leaders of the Uptane project and -representatives from the automotive industry began the process of standardizing the technology, releasing Version 1.0.0 of its Standard for Design and -Implementation under the auspices of [IEEE/ISTO](https://ieee-isto.org/) on July 31, 2019. In parallel, the group began establishing a set of best practices -for deploying Uptane to assure that the consistency of its security guarantees is preserved across different platforms and deployment situations.
-Now that the Uptane framework has been widely adopted, including integration into [Automotive Grade Linux](https://www.automotivelinux.org/), -an open source system currently used by many large OEMs, and implementations by a number of tier 1 suppliers, including -[Airbiquity](https://www.airbiquity.com/blog-events/blog-posts/delivering-secure-automotive-ota-updates-uptane-and-otamatic) and -[HERE](https://www.here.com/company/press-releases/en/2019-28-05 ), the Uptane project announces the release of its first update to that Standard. -[Version.1.1.0](https://uptane.github.io/papers/uptane-standard.1.1.0.html) was officially released on January 8, 2021. While the new version does -not specify major changes to existing implementations, it clarifies procedures, establishes a style guide for consistency in spelling, capitalization, -and use of punctuation, and describes options for simplifying operations with no loss of security. - -The new version of the Standard is accompanied for the first time by a version-controlled *Deployment Best Practices* companion document. While most of -the information in this latter document has been available on the Uptane website for several years, this deployment guidance is now being released as a -stand-alone document at the request of auto manufacturers. As Patti Vacek, one of the software engineers behind the first open-source Uptane -implementation from Here Technologies, explains, “These best practices have been informed by years of experience of putting the Standard into action. -Our partners have found this information to be a useful extension to the Standard, especially when it comes to making important decisions about -trade-offs and real-world implementation details.” Along with his colleague, Jon Oster, Vacek serves on the Uptane Standards team, a group -representing a cross-section of auto manufacturers, tier 1 suppliers, and representatives of academia and government regulatory agencies. - -A significant emphasis of the review process was clarifying Uptane to fulfill proliferating regulations and international standards within the -automotive industry. Key cybersecurity and software update regulations from the United Nations Economic Commission for Europe (UNECE WP 29) become -law in the European Union, Japan, and Korea this month. The regulation will apply to all new vehicle models in 2022, and all existing vehicle models -in 2024. Closely related international standards work is also in progress on ISO/SAE 21434 Road Vehicles Cybersecurity Engineering, and -ISO 24089 Road Vehicles Software Update, and so all automotive OEMs will need to adapt the design of their automotive connected systems. -“The emergence of these comprehensive automotive cybersecurity standards and regulations offers a historic opportunity to dramatically improve the -cybersecurity and safety of new vehicles,” says security consultant Ira McDonald, who has been an invited expert in the Trusted Computing Group -since 2006, and is an Uptane Steering Committee member. - -During the review period the team also identified issues to be addressed in 2021. Marina Moore, a Ph.D. 
candidate at NYU Tandon who has been a -developer on Uptane for the past two years, observes, “Through this process, we started discussions about breaking changes that will be needed in -future releases to ensure that the Uptane standard continues to evolve for additional industry use cases.” The Uptane Standards team plans a second -minor release in June 2021, and a larger revision, which will include more significant changes to the technology, in December 2021. - -Initially developed under a grant from the [U.S. Department of Homeland Security](https://www.dhs.gov/), Uptane represents contributions -from a team of engineers at New York University Tandon School of Engineering in Brooklyn, NY, the [University of Michigan Transportation Research Institute](https://umtri.umich.edu/) -in Ann Arbor, MI, and the [Southwest Research Institute](https://en.wikipedia.org/wiki/Southwest_Research_Institute) in Austin, TX. Uptane was -first announced in a paper written by Dr. Trishank Kuppusamy, Akan Brown, Sebastien Awwad, Dr. Damon McCoy, and Cappos from NYU Tandon; -Cameron Mott from Southwest Research Institute, and Russ Bielawski, Sam Lauzon and Andre Weimerskirch from the University of Michigan, -and was presented at the 2016 Embedded Security in Cars Conference (Escar USA 2016). Over a four-year period, more than 30 individuals -and 17 organizations have provided input to the design, development and implementation of Uptane. A full list of contributors to the project can -be found [here](https://uptane.github.io/people.html). diff --git a/_posts/2021-06-29-threat-modeling.md deleted file mode 100644 index 8aaaf99d..00000000 --- a/_posts/2021-06-29-threat-modeling.md +++ /dev/null @@ -1,27 +0,0 @@ ---- - -layout: article -title: "Design by Calvinball: Why it doesn’t work for secure system design" -subnav: blog -comments: true -author: 'Marina Moore' -categories: - - Informational - ---- - -The design of secure software systems can be a lot like Calvinball. In the comic strip Calvin and Hobbes by Bill Watterson, the little boy Calvin invents a sport named after himself in which the players make up the rules as they go. As a result, no two games are the same. Once a rule is created, it lasts the rest of the game. New rules can be created, but no existing ones can be removed. This results in very chaotic games, with players switching sides, singing songs, and literally moving the goalposts. - -The way software is designed and developed has a lot in common with Calvinball. Developers can choose to start with a threat model, or to focus first on usability and add security features later. However, in the latter case, once initial decisions have been made, it may be impossible to reverse them in order to embrace a holistic approach to security. Thus the developers will be forced to either leave out important security guarantees, or re-design the software in a way that will likely not be backwards compatible with the initial, insecure design. Instead, starting with a clear and thorough threat model makes it easier to ensure that the most important risks are accounted for from the beginning. - -The need to identify and prioritize those “most important risks” is another reason for threat modeling during the design stage. When designing secure software systems, it is tempting to want a system that is impervious to all attacks. However, no one has yet achieved un-hackable software.
Unfortunately, this desire can create a Calvinball-like hodge-podge of security layers. Instead, designers should borrow a page from the manufacturers of physical safes. The safes in banks and other facilities designed to hold cash, bonds, gold, and other valuables are rated for how many minutes they can withstand fire or another type of assault. In cybersecurity, it is a given that state-of-the-art cryptography can be broken by an attacker with unlimited resources, as cryptographic algorithms assume that an attacker is limited by the processing speed of modern computers. Instead of aiming for perfect security, we instead ask questions like “Who am I protecting against?”, “What resources do I expect an attacker to have?”, and “What is the most sensitive data in my system?” so that our architecture can systematically mitigate those risks. We ask these questions through the process of threat modeling. - -So what is a threat model? According to [OWASP](https://owasp.org/www-community/Threat_Modeling), threat modeling “works to identify, communicate, and understand threats and mitigations within the context of protecting something of value”. This process allows the designers of security systems to state upfront in the design process what attacks the system is designed to protect against, and what is out of scope for a particular project. Software engineers then use the threat model to develop defensive strategies against the enumerated attacks. When reviewing code, they can test the safeguards implemented to prevent these attacks. - -In software design by Calvinball, defining and prioritizing security strategies has to be the first rule written, so that no future rule violates the security properties on which users of the software rely. Threat modelling should be the critical first step in the process of creating secure software. Software security systems that do not specify what they are securing against are far less likely to successfully mitigate more likely or impactful attacks. There is good reason for this. Without a threat model, system designers can be tempted to take shortcuts or fail to think of a particular attack scenario. The programmers will have no way to ensure that their system prevents the highest priority attacks. Without insight into the expected resources of an attacker, it is difficult to make good decisions about cryptographic algorithms, protocols, or policies. - -Security systems without a threat model can be labeled “designed by Calvinball.” Strategies that evolve on the fly without clearly defined security goals and priorities may resolve issues on a piecemeal level, but also create an ever thickening matrix of modifications and restrictions that can fail to mitigate the most serious attacks. In fact, developers that do not consider security threats early in the design process can implement a system that simply cannot mitigate certain attacks without redesigning the entire system. For example, if key compromise is not considered early in the design process, the system may rely on trusted keys with no way to ensure that these keys are current and uncompromised. - -To conclude, I’d like to challenge the reader to try to think of well-designed security systems that did not start with a well-defined threat model. I’m curious if anyone has an example of such a system, and can provide insight on how that system overcame inaccurate assumptions about attacker capabilities. - -Calvinball could be a great way to spend an afternoon with friends on a beautiful summer day. 
Using it as a model for designing software for use in the real world could be “game over.” diff --git a/_posts/2021-07-26-signature-verification.md b/_posts/2021-07-26-signature-verification.md deleted file mode 100644 index a9ffd3a9..00000000 --- a/_posts/2021-07-26-signature-verification.md +++ /dev/null @@ -1,20 +0,0 @@ ---- - -layout: article -title: "Santa's signatures part 1 -- Did the Grinch intercept your Christmas card?: The importance of signature verification" -subnav: blog -comments: true -tagline: 'Why cryptographic signatures are only useful when paired with verification.' -author: 'Marina Moore' -categories: - - Informational - ---- - -Recently, there has been greater awareness of the importance of using cryptographic signatures as a protective measure when distributing software and metadata. This is a big win for the overall security of software ecosystems, but on their own, signatures do not enhance software security. Anyone can sign a holiday card as “Santa Claus,” but if a kid wants to be sure that the card has not been altered since Santa signed it, more evidence is needed. - -In this series of posts, I want to spend some time talking about the importance of signature verification and understanding what is and isn’t achieved by validating a signature. To start, let’s define what we mean by a cryptographic signature. A signature is a bit of cryptography attached to an asset that allows whoever controls a given [private key](https://en.wikipedia.org/wiki/Public-key_cryptography) to attest to the validity of that asset. Anyone with a computer can generate a signature for an asset using their own private key. The benefit of signatures is they provide assurance of the veracity of the data associated with them. To verify a signature, another entity can use a public key associated with the private key to ensure that the signature was made by a particular trusted party, and that the data associated with the signature was not altered in transit. - -Using a cryptographic signature, a kid can ensure that after Santa wrote and signed your Christmas card, the Grinch didn’t change a few words in the middle. The signature allows for assurance of the integrity of the message from when Santa signed it in the North Pole to when a kid opened it in the living room on Christmas Day. - -[Go to part 2: Who is signing your Christmas card?: Establishing trust](https://ssl.engineering.nyu.edu/blog/2021-07-27-signature-trust) diff --git a/_posts/2021-07-27-signature-trust.md b/_posts/2021-07-27-signature-trust.md deleted file mode 100644 index aa928153..00000000 --- a/_posts/2021-07-27-signature-trust.md +++ /dev/null @@ -1,33 +0,0 @@ ---- - -layout: article -title: "Santa's signatures part 2 -- Who is signing your Christmas card?: Establishing trust" -subnav: blog -comments: true -tagline: '' -author: 'Marina Moore' -categories: - - Informational - ---- - -In the previous post, we established that a kid can verify a card has not been changed since Santa signed it. The kid now wants to ensure that it was actually Santa or one of his trusted elves who signed the card. In this post, we ask how the kid can ensure that a signature came from a trusted holiday emissary, and not from the Grinch. - - - - -For cryptographic signatures, the source of truth lies in a verification strategy using a trusted public key infrastructure (PKI) system. 
I won’t go into all of the details of PKI systems here, though for an example of how to distribute keys you should check out [The Update Framework (TUF)](https://theupdateframework.io). But, basically, the PKI system is like Santa’s driver’s licence. It is a trusted source that affirms what Santa’s signature should look like on your Christmas card. Thus, it’s not the signature alone that assures little Timmy or Jane that Santa left them a card, but also the evidence that some authority—in this case the North Pole DMV—affirmed what Santa’s signature, and those of his elves, look like. - -But what if a fake license was substituted for the real one at some point? To check if such a substitution is possible, we would need some key contextual information. In verifying signatures, that might include knowing who owns the private key used. For example, a software updater may send a signed package, along with the public key necessary to verify that package all in the same protocol. Yet, if an attacker is able to interrupt network traffic or gain access to the software repository, that attacker can then change both the package and the key used to sign the package. It would be the equivalent of the Grinch signing a letter with Santa’s name, and attaching a fake driver’s licence with a matching signature. This leaves the recipient no better off than when they download unsigned data. - - - - -In order to ensure that software is not only signed, but valid, you must ensure that you are using a public key that was communicated over a secure channel, thus keeping out any interfering Grinches. For cryptographic signatures, a secure channel can be a trusted PKI system such as TUF, or an offline mechanism. These mechanisms act like government issued ids and allow you to verify the identity of an individual. By ensuring that the public keys are communicated securely, you can ensure that when they are used to verify data, that data actually came from a trusted signer. - -In this post, we established how to obtain a collection of trusted keys, but on it’s own this isn’t sufficient to build a secure signature system. In future posts, I will focus on other important considerations when using cryptographic signatures, including ensuring that signatures are valid at the time they are verified, ensuring that the revocation of keys or signatures is communicated to the verifier, and effectively communicating signing algorithms. - -[Go to part 3: Ensuring the Easter Bunny isn’t signing your Christmas cards: Applying limitations of trust](https://ssl.engineering.nyu.edu/blog/2021-07-28-signature-namespaces) - -Previous posts in this series: -* [Part 1: Did the Grinch intercept your Christmas card?: The importance of signature verification](https://ssl.engineering.nyu.edu/blog/2021-07-26-signature-verification) diff --git a/_posts/2021-07-28-signature-namespaces.md b/_posts/2021-07-28-signature-namespaces.md deleted file mode 100644 index 8ad0ea8a..00000000 --- a/_posts/2021-07-28-signature-namespaces.md +++ /dev/null @@ -1,28 +0,0 @@ ---- - -layout: article -title: "Santa's signatures part 3 -- Ensuring the Easter Bunny isn’t signing your Christmas cards: Applying limitations of trust" -subnav: blog -comments: true -tagline: '' -author: 'Marina Moore' -categories: - - Informational - ---- - -In parts 1 and 2, we learned how to verify a signature from Santa, and to ensure that this signature actually came from a trusted party. 
However, the kid trusts a lot of different holiday icons, so how do they place limitations on who they trust to send cards for each holiday. Maybe for a birthday, a child could get cards from any holiday character, but the kid wants to ensure that the Easter Bunny isn’t overstepping and signing Christmas cards as well. - -One initial solution here is to maintain a mapping of every holiday to the entity trusted to sign cards for that holiday. However, there are a lot of holidays to keep track of, and each icon may replace or revoke keys, which would need to be reflected in the mapping. Further, Santa likes to delegate a lot of his Christmas card signing to his elves (which we will talk more about in a future post), so he wants to make sure that the elves’ keys are trusted by anyone who trusts his signature. - -Communicating which individual keys are eligible to sign packages can be a time- and labor-intensive process. Namespacing is an effective way to manage the issue of who can sign for what. This mechanism, which is included by default in The Update Framework (TUF), can be used to define which key is trusted for each piece of software. To do so, you start by securely communicating a set of public keys and the subset of software packages that each key is trusted to sign (the namespace). - -Not all keys should be trusted for every piece of software. A valid signature for Santa Claus should not be used to validate a Halloween card. A Debian developer trusted for maintaining the documentation shouldn’t need to be trusted to sign the kernel (though this is what happens today). This means that you not only need to know ahead of time what keys you trust, but also which of these keys should be used to verify specific packages. - -Now, our skeptical child can not only ensure that his Christmas card is authentic, they can make the same claim of every holiday card throughout the year. However, they still need to securely receive each trusted key. In the next part, we will discuss how to simplify key distribution through the use of delegations. - -[Go to part 4: Saving Santa some signing: Using delegations to distribute data](https://ssl.engineering.nyu.edu/blog/2021-08-12-signature-delegations) - -Previous posts in this series: -* [Part 1: Did the Grinch intercept your Christmas card?: The importance of signature verification](https://ssl.engineering.nyu.edu/blog/2021-07-26-signature-verification) -* [Part 2: Who is signing your Christmas card?: Establishing trust](https://ssl.engineering.nyu.edu/blog/2021-07-27-signature-trust) diff --git a/_posts/2021-08-12-signature-delegations.md b/_posts/2021-08-12-signature-delegations.md deleted file mode 100644 index 83259eb3..00000000 --- a/_posts/2021-08-12-signature-delegations.md +++ /dev/null @@ -1,62 +0,0 @@ ---- - -layout: article -title: "Santa's signatures part 4 -- Saving Santa some signing: Using delegations to distribute data" -subnav: blog -comments: true -tagline: '' -author: 'Marina Moore' -categories: - - Informational - ---- - -In parts 1-3, we discussed how very bright children can use signatures, verification, and namespacing to ensure that their Christmas cards actually came from Santa. But Santa is busy managing his Christmas empire, and so he needs to have his elves sign Christmas cards on his behalf. He could create a stamp of his signature and distribute this to his elves, but if any of the many copies of this stamp are lost or stolen, anyone in possession of it would have the full signing authority of Santa. 
So he needs a way to indicate that his elves are signing on his behalf without actually giving up control over his signature. Even a magical holiday icon cannot distribute every elf’s driver’s licence to every child. So what he needs is a simple way to indicate when he is passing his signing responsibility on to an elf. - -Santa can achieve this through the use of delegations. As the delegator, Santa passes some of his responsibilities to another party (the delegatee), which in this case is an elf. In terms of package signing, a delegation is a statement that serves as proof that someone else has been authorized to sign. When using cryptographic signatures, a delegation would include information about the public key of the delegatee and the scope of their trust, signed by the delegator. A user who trusts the delegator can verify the delegation using a trusted public key, and can then use the public key indicated in the delegation to verify the package. - -Delegations may be revoked at any time by the delegator by replacing the delegation. So if an elf intern works in Santa’s workshop for a winter, Santa can delegate a set of cards to them when they join. At the end of the winter intern season, Santa can then release a new set of delegations that excludes the intern. Anyone verifying signatures after this new delegation is created will no longer trust the intern to sign on behalf of Santa. - -For example, a delegation may contain the following information, wrapped in a cryptographic signature by a trusted party (adapted from the TUF specification): - -```
"delegations": {
  "keys": {
    "f761033eb880143c52358d941d987ca5577675090e2215e856ba0099bc0ce4f6": {
      "keytype": "ed25519",
      "scheme": "ed25519",
      "keyval": {
        "public": "b6e40fb71a6041212a3d84331336ecaa1f48a0c523f80ccc762a034c727606fa"
      }
    }
  },
  "roles": [
    {
      "keyids": [
        "f761033eb880143c52358d941d987ca5577675090e2215e856ba0099bc0ce4f6"
      ],
      "name": "project",
      "paths": [
        "project/file3.txt"
      ]
    }
  ]
}
``` - -This metadata indicates that the public key starting with “b6e4” is trusted to sign “project/file3.txt.” - -Namespaces separate a collection of items by name so that one can delegate authority for some, but not all items. In the above example, the delegator took advantage of namespaces to delegate authority only for “project/file3.txt,” and not for other files in the project. As another example, a Linux developer could delegate the “ls” utility to a developer without giving this individual permission to sign “chmod”. - -Santa can take advantage of namespaces so that a rogue elf has limited ability to forge Santa’s signature. He can designate an elf for each type of toy that will accompany the Christmas cards, and that elf is only allowed to sign cards for children who receive that type of toy. This combination of delegations and namespacing allows for fine-grained control over who is trusted to sign and what they are trusted to sign. - -Namespacing provides some protection in the event of a stolen cryptographic key, as the delegatee’s key is limited in scope. For example, a particularly careless elf might lose a signing key going home from the North Pole bar. If the key is picked up by the Grinch, he would only be able to use it for children delegated to that elf. Further, once the sheepish elf reports the stolen key to Santa, Santa could re-issue a delegation that does not include the stolen key. - -Delegations do not have to stop at one level.
Say the elf responsible for vehicle-related toy Christmas cards finds herself in trouble this year as there are too many cards for her alone to sign. She decides to further delegate some of these cards through the North Pole bureaucracy. To do so, she signs a new delegation that includes four of her direct reports who will split up the card signing. She uses namespacing to give each of them control over one type of toy vehicle (trains, cars, planes, or boats) and signs this delegation with her own signature. She distributes her signed delegation alongside each of the cards, so that the addressed child can verify the chain of delegations. - -Now, Santa can safely delegate Christmas card signing to his elves without sharing his signing key or giving away too much power. And, a child verifying their Christmas card can start with a trusted key for Santa, use this to verify a delegation to the responsible elf, check for further delegations, and then compare the signature on their Christmas card to that elf’s signature. - -Previous posts in this series: -* [Part 1: Did the Grinch intercept your Christmas card?: The importance of signature verification](https://ssl.engineering.nyu.edu/blog/2021-07-26-signature-verification) -* [Part 2: Who is signing your Christmas card?: Establishing trust](https://ssl.engineering.nyu.edu/blog/2021-07-27-signature-trust) -* [Part 3: Ensuring the Easter Bunny isn’t signing your Christmas cards: Applying limitations of trust](https://ssl.engineering.nyu.edu/blog/2021-07-28-signature-namespaces) diff --git a/_posts/2021-09-07-uptane-summer.md b/_posts/2021-09-07-uptane-summer.md deleted file mode 100644 index 936dcbba..00000000 --- a/_posts/2021-09-07-uptane-summer.md +++ /dev/null @@ -1,27 +0,0 @@ ---- - -layout: article -title: "Uptane marks a pair of firsts" -subnav: blog -comments: true -tagline: 'The summer of 2021 was anything but slow for the Uptane project. It not only issued its second minor version of the *Uptane Standard for Design and Implementation,* but also published its first whitepaper and announced its first international virtual workshop.' -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- - -The summer of 2021 was anything but slow for the Uptane project. It not only issued its second minor version of the *Uptane Standard for Design and Implementation,* but also published its first whitepaper and announced its first international virtual workshop. - -The whitepaper, entitled [*Uptane: Securing delivery of software updates for ground vehicles*](https://uptane.github.io/papers/uptane_first_whitepaper_7821.pdf) starts with an explanation of the growing vulnerability of the computing units in cars and why security strategies developed for conventional systems may not be able to defend them. It urges manufacturers to take a realistic approach to cybersecurity, one that recognizes that it’s not a question of *if* an attack may occur but *when.* This mindset is the governing idea behind compromise resilience, a defensive strategy that aims to minimize the damage should an attack occur. As the whitepaper emphasizes, a design built for compromise resilience—an element that sets Uptane apart from most other automotive cybersecurity systems— will not disintegrate if a hacker obtains control of a repository or a signing key. In addition, compromise resilient systems like Uptane have built-in mechanisms to make a quicker recovery from such attacks. 
- -To ensure that Uptane is also on the industry’s radar on a global level, the group is partnering with escar Europe, the world’s leading automotive cybersecurity conference, to offer its first international virtual workshop. The free workshop will be held online from 1 p.m. to 4:30 p.m. in Germany (7 a.m. to 10:30 a.m. New York time, 8:00 p.m. to 11:30 p.m. Tokyo time). Note that you can register for the free workshop even if you are not attending the escar conference in November. One registration entitles you to attend both sessions. - -The workshop is offered in two parts. - -Part 1, hosted by Ira McDonald of High North, Inc. and Marina Moore of NYU’s Tandon School of Engineering, presents an overview of Uptane’s design and the threats it is equipped to defend against. It also explains how its emphasis on compromise-resilience—or the ability to limit the damage from any potential compromise—makes it a realistic solution at a time when the rise of organized criminal enterprises and nation-state attackers has greatly increased the potential consequences of such attacks, in terms of both economic and human costs. - -Part 2, hosted by André Weimerskirch of Lear Corporation and Patti Vacek of unu Motors, -is designed for those who may already have some familiarity with Uptane and are interested in learning more from companies and organizations that have implemented the framework. The presentation will focus on examples/case studies, as well as recent or emerging challenges, such as supply chain security for automotive software updates, that the framework is adapting to meet. - -[Registration](https://www.escar.info/escar-europe/registration.html) is open now through escar Europe. Details on how to access the workshop will be provided a bit later in the month. diff --git a/_posts/2022-02-21-tuf-1_0_0.md deleted file mode 100644 index 2f3d35cb..00000000 --- a/_posts/2022-02-21-tuf-1_0_0.md +++ /dev/null @@ -1,55 +0,0 @@ ---- - -layout: article -title: "Python-TUF reaches version 1.0.0" -subnav: blog -comments: true -tagline: "The Python-TUF community is proud to announce the release of Python-TUF 1.0.0" -author: " Jussi Kukkonen and Lukas Pühringer" -categories: - - "TUF" - ---- - - - -The Python-TUF community is proud to announce the release of Python-TUF 1.0.0. -The release, which is available on [PyPI](https://pypi.org/project/tuf/) and -[GitHub](https://github.com/theupdateframework/python-tuf/), introduces new -stable and more ergonomic APIs. - -Python-TUF is the reference implementation of [The Update -Framework](https://theupdateframework.io/) specification, an open source -framework for securing content delivery and updates. It protects against -various types of supply chain attacks and provides resilience to compromise. - -For the past 7 releases the project has introduced new designs and -implementations, which have gradually formed two new stable APIs (a brief usage sketch follows this list): -- [`ngclient`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.ngclient.html): - A client API that offers a robust internal design providing implementation - safety and flexibility to application developers. -- [`Metadata API`](https://theupdateframework.readthedocs.io/en/latest/api/tuf.api.html): - A low-level interface for both consuming and creating TUF metadata. Metadata - API is a flexible and easy-to-use building block for any higher level tool or - library.
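To give a feel for how the two APIs fit together, here is a minimal, illustrative sketch. It is not a drop-in recipe: the repository URLs, directories, and target name are placeholders, and the exact constructor arguments and method names should be checked against the 1.0.0 documentation linked above.

```python
# Illustrative sketch of the two stable APIs; URLs, paths, and the target
# name are placeholders, and argument names should be checked against the
# python-tuf 1.0.0 documentation.
from tuf.api.metadata import Metadata   # Metadata API: read and write TUF metadata
from tuf.ngclient import Updater        # ngclient: high-level client update workflow

# Metadata API: inspect a local metadata file.
root = Metadata.from_file("metadata/root.json")
print("root version:", root.signed.version, "expires:", root.signed.expires)

# ngclient: run a client update cycle. The metadata directory must already
# contain a trusted root.json obtained out of band.
updater = Updater(
    metadata_dir="metadata",
    metadata_base_url="https://example.com/metadata/",
    target_dir="downloads",
    target_base_url="https://example.com/targets/",
)
updater.refresh()                                     # refresh top-level metadata
info = updater.get_targetinfo("example-pkg.tar.gz")   # look up a target by path
if info is None:
    raise RuntimeError("target not found in trusted metadata")
path = updater.find_cached_target(info)               # reuse a verified local copy, if any
if path is None:
    path = updater.download_target(info)              # download and verify the target
print("verified target available at", path)
```

The important property is that the application never has to implement the TUF verification rules itself; it simply asks the Updater for a target and receives a path to a file that has already been verified against the trusted metadata.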
- -Python-TUF 1.0.0 is the result of a comprehensive rewrite of the project, -removing several hard to maintain modules and replacing them with safer and -easier to use APIs: -- The project was reduced from 4700 lines of hard to maintain code to 1400 - lines of modern, maintainable code -- The implementation details are now easier to reason about, which should - accelerate future improvements on the project -- Metadata API provides a solid base to build other tools on top of – as proven - by the ngclient implementation and the [repository code - examples](https://github.com/theupdateframework/python-tuf/tree/develop/examples/repo_example) -- Both new APIs are highly extensible and allow application developers to - include custom network stacks, file storage systems or public-key - cryptography algorithms, while providing easy-to-use default implementations - -With this foundation laid, Python-TUF developers are currently planning next -steps. At the very least, you can expect improved repository side tooling, but -we're also open to new ideas. Pop in to -[#tuf](https://cloud-native.slack.com/archives/C8NMD3QJ3) on CNCF Slack or -[Github issues](https://github.com/theupdateframework/python-tuf/issues/new) -and let’s talk. diff --git a/_posts/2022-03-28-intoto-incubator.md b/_posts/2022-03-28-intoto-incubator.md deleted file mode 100644 index 2095c820..00000000 --- a/_posts/2022-03-28-intoto-incubator.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -layout: article -title: in-toto moves to the CNCF Incubator -subnav: blog -comments: true -tagline: "The in-toto project, a supply chain security solution which provides protection by collecting and verifying relevant data at each step of a software product’s lifecycle, was recently promoted to the incubator of the Cloud Native Computing Foundation." -author: 'Lois Anne DeLong' -categories: - - 'in-toto' - ---- - -The in-toto project, a supply chain security solution which provides protection by collecting and verifying relevant data at each step of a software product’s lifecycle, was recently promoted to the incubator of the [Cloud Native Computing Foundation](https://www.cncf.io/). CNCF, a [Linux Foundation](https://www.linuxfoundation.org/ )-supported program designed to “assist the growth and development of promising new open source technologies applicable to cloud applications,” announced the promotion in a [press release](https://www.cncf.io/blog/2022/03/10/supply-chain-security-project-in-toto-moves-to-the-cncf-incubator/) issued March 10, 2022. - -“Born” in the [Secure Systems Laboratory](https://ssl.engineering.nyu.edu/) at NYU’s Tandon School of Engineering in 2015, under the guidance of lab director Dr. Justin Cappos, the move to the CNCF “incubator” is an indication of in-toto’s growing maturity. It marks fulfillment of a number of criteria, including adoption by other projects and active participation from multiple organizations. Incubating projects must also adopt the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md) and achieve and maintain the [Core Infrastructure Initiative Best Practices Badge](https://bestpractices.coreinfrastructure.org/en). - -"I am very excited to see in-toto grow into CNCF incubation. Not only because of what it means for the project, but for all the doors that it opens for new contributors, synergies with other CNCF projects and the ability to tackle new and open questions with regards to supply chain security, in the cloud or otherwise,” states [Dr. 
Santiago Torres Arias](https://www.cerias.purdue.edu/site/people/faculty/view/3153), an assistant professor at Purdue University who was a lead developer on in-toto while completing his doctorate at New York University. “On a personal level, I can't overstate the uniqueness of in-toto's case, for it is not only an open source project, but one of the few that come from the academic world into the broader public with fresh ideas and a bold proposition to solve the problem at an ecosystem level. I can't wait to see what's to come for in-toto in the coming years.” - -The CNCF promotion, and the increased visibility in the open source world that comes with it, arrives at a time when the need for reliable software supply chain security has never been greater. Chris Aniszczyk, CNCF chief technical officer, is quoted in a press release from the Foundation acknowledging this point. “We’re excited to have a project offering innovation in the supply chain security space,” he says, adding, “We look forward to seeing collaboration among the community to continue to make the cloud native ecosystem more secure.” Justin Cormack, who served as a CNCF project sponsor, concurs, noting, “in-toto …provides secure and trustworthy ways to represent and attest all the operations within the cloud native pipeline.” - -in-toto works as follows: For every piece of software it protects, it provides a layout that defines for each individual step what actions are to be taken and by whom. This data is captured in metadata, as are all the artifacts involved. The designated functionary at each step also affixes a cryptographic signature on the metadata. When the end-user receives the finished product, he or she has a complete record of the product’s journey, and can verify if the software was created according to the designer’s original plans. If there is any divergence from the original layout, a user can pinpoint where the divergence occurred and who is responsible for it. - -The in-toto development team also includes NYU Tandon alumnus [Dr. Trishank Karthik Kuppusamy](https://www.linkedin.com/in/trishank-karthik-kuppusamy), now Engineering Manager at Datadog; developer Lukas Pühringer, and current Ph.D. candidate [Aditya Sirish A Yelgundhalli](https://engineering.nyu.edu/student/aditya-sirish-yelgundhalli), all from the Secure Systems Laboratory at NYU Tandon; and Hammad Afzali Nanize, Anil Kumar Ammul, Sangat Vaidya, and Professor [Reza Curtmola](https://web.njit.edu/~crix/), co-director of the Cybersecurity Research Center, all from the New Jersey Institute of Technology. The project has participated in various initiatives that have attracted other contributors, such as Christian Rebischke of Arch Linux and Qijia “Joy” Liu, a student at the University of Pennsylvania, through Google Summer of Code (GSoC), and several undergraduate students—Alan Chung Ma of Purdue University; Yuanrui Chen, Isha Vipul Dave, Kristel Fung, Cindy Kim, and Benjamin Wu, all from NYU—through various research programs at both universities. Finally, due to in-toto’s relevance and impact in the industry, it has received contributions from employees at various companies through their open source contribution teams. Some significant contributors from this group are Mark Lodato, Tom Hennen, and Sergio Felix of Google, and Joshua Lock, Jussi Kukkonen, Martin Vrachev, and Teodora Sechkova of VMware.
- -Since its inception, in-toto has been adopted or integrated into a number of major open source software projects, including several within the CNCF and the [Open Source Security Foundation](https://openssf.org/), and in [Grafeas](https://grafeas.io/), [Kubesec](https://kubesec.io/), [rebuilderd](https://rebuilderd.com), and [Sigstore’s cosign](https://github.com/sigstore/cosign/blob/main/specs/COSIGN_PREDICATE_SPEC.md). It has been implemented in different languages like Python, Golang, Java, and Rust, and is part of crucial security projects, such as [Reproducible Builds](https://reproducible-builds.org/) and [SLSA](https://www.slsa.dev/). The project has been adopted in production by [Datadog](https://www.datadoghq.com/), which has used it to secure its pipelines since 2019, and [SolarWinds](https://static.sched.com/hosted_files/supplychainsecurityconna21/df/SupplyChainCon-TrevorRosen-Keynote.pdf), which redesigned its build pipelines after the SUNBURST attack came to light. In its three years under the umbrella of the CNCF, in-toto has attracted more than 132 contributors from more than 16 different organizations. diff --git a/_posts/2022-03-29-uptane-v2.md deleted file mode 100644 index 5eda5102..00000000 --- a/_posts/2022-03-29-uptane-v2.md +++ /dev/null @@ -1,31 +0,0 @@ ---- - -layout: article -title: "Uptane V.2.0.0: Open source standard for securing automotive computing units releases new version" -subnav: blog -comments: true -tagline: "On March 18, the Uptane project, an open community effort to secure and protect software delivered over-the-air to automobiles, announced the release of *Uptane V.2.0.0 Standard for Design and Implementation*" -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- - -On March 18, the [Uptane](https://uptane.github.io/) project, an open community effort to secure and protect software delivered over-the-air to automobiles, announced the release of [*Uptane V.2.0.0 Standard for Design and Implementation*](https://uptane.github.io/papers/uptane-standard.2.0.0.html). This new edition of the Uptane Standard and the companion reference document *Deployment Best Practices* reflect the project’s evolution towards greater adaptability to the needs of legacy systems and the emerging threats of sophisticated and persistent attackers. - -In the new Standards volume, the Uptane project mandates a few key added actions — such as improving the process for verifying the authenticity of an image before downloading — while allowing more flexibility in implementations than in previous releases. An example of this latter change was the decision to remove references to the original Uptane-specific time server, instead letting implementers make their own decisions about secure sources of reliable time. - -The changes in Uptane V.2.0.0 fall into three categories: design changes, to improve security; language changes, to continue an ongoing commitment to clarity and simplicity; and policy/administrative changes, to bring the Uptane project in line with best practices in Standards development. The administrative changes, which are also intended to help the Uptane project preserve architectural integrity, include the adoption of a formal policy for approving major and minor releases. This new edition of Uptane also reflects the adoption of a style guide to ensure consistency in spelling, capitalization, and the use of punctuation. - -As is customary in major releases, there are a few clarifications in Uptane V.2.0.0 worth noting.
None of these clarifications significantly change the code base of existing Uptane implementations, so they should not cause compatibility issues. In addition to removing the requirement for use of the Uptane-specific time server and adding a requirement for an enhanced verification process, these Uptane V.2.0.0 changes also include: - -- recommending that filenames of images be encoded to prevent a path traversal on the client system. -- requiring monitoring the download speed of image metadata and image binaries to detect and defend against a slow retrieval attack. -- requiring that a vehicle identifier be used when Targets metadata from the Director repository includes no images, to prevent replay attacks. - -In terms of language changes, the Uptane Standard now rigorously restricts the use of conformance imperatives — words such as SHALL or MUST that have specific meaning when used in standards — to the cases where they are actually required for interoperation or limiting behavior with the potential for causing harm. Uptane V.2.0.0 also clarifies the functional properties of cryptographic keys, so that signing keys (which must be unique) are not confused with encryption keys (which can be shared-use keys). Uptane V.2.0.0 also clarifies that all primary ECUs always perform full verification on downloaded software update packages. - -*Uptane Standard for Design and Implementation* is available for download in HTML and PDF formats through the [Uptane website](https://uptane.github.io/). The companion volume, *Uptane Deployment Best Practices* will be available for download from the website in the next few weeks. - - -Uptane was developed by a team of engineers that included Dr. Justin Cappos, associate professor of computer science and engineering at NYU Tandon School of Engineering and director of its Secure Systems Lab. Dr. Cappos remains an active contributor to the project, serving as a member of the project’s steering committee. The lab also continues to contribute to the project’s development through the work of Ph.D. candidate Marina Moore, and alumni like Dr. Trishank Karthik Kuppusamy, now engineering manager at Datadog. Uptane is a [Joint Development Foundation](https://www.jointdevelopment.org/) project of the [Linux Foundation](https://www.linuxfoundation.org/). diff --git a/_posts/2022-09-02-preston-farewell.md b/_posts/2022-09-02-preston-farewell.md deleted file mode 100644 index 86a40dcf..00000000 --- a/_posts/2022-09-02-preston-farewell.md +++ /dev/null @@ -1,34 +0,0 @@ ---- - -layout: article -title: "PKM Has Left the Building: Farewell Thoughts from 2022 Ph.D. Grad Preston Moore" -subnav: blog -comments: true -tagline: "" -author: 'Lois Anne DeLong' -categories: - - 'CrashSimulator' - ---- - - - -[Preston Kent Moore](https://www.linkedin.com/in/preston-moore-ab8b9499/), a native of Eastern Tennessee, who came to NYU Tandon because he “wanted to build real tools and systems” is the Secure Systems Lab’s newest Ph.D. He officially became Dr. Moore on May 16th at Brooklyn’s Barclay Center. In the space between his arrival in the fall of 2015 and his recent departure, Preston built a tool, wrote a new language, came up with a new approach to identify environmental bugs, won a best paper award, and used Pokemon and card tricks to explain and sell his research ideas. 
- -Shortly before embarking on his newest adventure as a Senior Software Engineer at [Anaconda](https://www.anaconda.com), Preston shared some thoughts on how his journey to “improve the reliability of open source software” led him to leave a full-time lecturer post at [East Tennessee State University](https://www.etsu.edu/ehome/) to pursue a Ph.D. at Tandon. Actually, his teaching position was a motivating factor in that decision, as he was told he would need a Ph.D. to continue to work in academia. Once he knew additional schooling was needed, he chose NYU because he appreciated its “real-world” focus. “I wanted to build real tools and systems,” he noted, adding that he appreciated the hands-on approach he found at the school. - -#### Deep Waters and Stormy SEAs - -Preston confesses, though, that making the most of such an approach was not as easy as he initially thought. From early on, “I realized I was stepping into deep waters. In my previous academic work, everything had clean interfaces and layered designs. The real world has fuzzier interfaces.” For a number of years, those fuzzy interfaces were tied to a bug detection tool called [CrashSimulator](https://ssl.engineering.nyu.edu/projects#crashsimulator). It took several changes in approach, significant iterations on design, and a few unsuccessful paper submissions, before he was able to introduce and demonstrate the effectiveness of this tool. “CrashSimulator had a lot of low-level system details that I was not prepared for,” he explained, including being able to work with the types of “weird hacks” and jerry-rigged workarounds that are commonplace in industry. Progress came in fits and starts and occasionally, “I would find that I got nothing done in a particular week because I went to war on one thing.” The result of treading those deep waters though was an enhanced understanding of the problem at hand. “The learnings I have come from early disastrous planning,” he observed adding that, “It was a forcing factor to grow.” - -Thrashing about in deep waters can also yield deeper insights. While working on CrashSimulator, one such insight was that, in this instance, the tool itself was not as important as the technique that makes the tool possible. Preston and his co-authors were able to codify what they called [Simulating Environmental Anomalies (SEA)](https://ssl.engineering.nyu.edu/papers/moore_crashsim_issre2019.pdf), a technique that “utilizes evidence of one application’s failure in a given environment to generate tests that can be applied to other applications, to see whether they suffer from analogous faults.” The SEA technique and its applications were covered in a paper that ended up taking top honors at the [30th IEEE International Symposium on Software Reliability Engineering](https://www.computer.org/csdl/proceedings/issre/2019/1hrLbO6MnUQ), and featured an opening analogy based on Pokemon. - -According to Preston’s PhD advisor, [Prof. Justin Cappos](https://ssl.engineering.nyu.edu/personalpages/jcappos/), “Preston has a knack for communicating ideas in a way that makes them accessible and entertaining. I was not at all surprised that he excelled while teaching several classes during his PhD studies at NYU.” - -#### At home in NYC - -While Preston’s new position may take him down some unfamiliar roads, at least the overall setting will be one he finds familiar, as Anaconda does have offices in New York. “I have fallen completely in love with NYC. It’s a huge change from Bluff City, TN. 
There is a sense of pace and energy here that makes you want to get things done. The hustle is real. It was a good environment for me.” He adds that the “hustle” he mentions was reflected in his lab mates as well. “The energy is contagious. You want to do well.” - - -In offering some parting advice to his lab mates, Preston states they should be using that energy. “Have a strong bias towards action. Take a swing at it and, if it's a disaster, there will be other options. I never felt pressure to keep driving down a path that wasn’t working.” - diff --git a/_posts/2022-09-09-uptane-scudo.md b/_posts/2022-09-09-uptane-scudo.md deleted file mode 100644 index 53c5ce56..00000000 --- a/_posts/2022-09-09-uptane-scudo.md +++ /dev/null @@ -1,44 +0,0 @@ ---- - -layout: article -title: "Scudo: End-to-End Vehicle Software Security from Uptane and in-toto" -subnav: blog -comments: true -tagline: "This spring, the [Uptane](https://uptane.github.io/) project introduced Scudo, a comprehensive secure framework that can deliver end-to-end software supply chain protection for computing units on automobiles." -author: 'Lois Anne DeLong' -categories: - - 'Uptane' - ---- - -This spring, the [Uptane](https://uptane.github.io/) project introduced Scudo, a comprehensive secure framework that can deliver end-to-end software supply chain protection for computing units on automobiles. Named after the Italian word for shield, Scudo integrates the compromise resilience and secure delivery mechanisms of Uptane with the proven supply chain security mechanism of in-toto. The resulting framework offers a timely response to threats against an emerging attack surface—automotive electronic control units or ECUs— at a point in time when both industry standards and government regulations are calling for improved protection of the software lifecycle across all industries. - -As described in [Scudo: A Proposal for Resolving Software Supply Chain Insecurities in Vehicles](https://uptane.github.io/papers/scudo-whitepaper.pdf), an Uptane whitepaper originally published May 22, 2022, and updated in July, the framework ensures that the images being uploaded by the Uptane framework are free of tampering. It can offer this assurance because of the signed metadata in-toto generates at each step in the development, packaging, testing and delivery of a software image. This metadata attests to the authenticity of the image and allows a client to verify who performed each step and in what order. If the signature or the information in the metadata is different from what was intended, Scudo will reject it. - -Scudo brings to the solution of supply chain insecurity two established open source technologies. For the past five years, Uptane has been a mainstay in secure software update systems used by a number of original equipment manufacturers (OEMs). While in-toto is new to the automotive space, it has been widely integrated into open source projects, such as [Sigstore](https://docs.sigstore.dev/cosign/attestation/), [GitLab](https://github.com/in-toto/friends/tree/main/gitlab), and [Reproducible Builds](https://github.com/in-toto/friends/tree/main/rebuilderd). 
Even SolarWinds, which ignited much of the recent concern about supply chain vulnerabilities when its software update mechanism inadvertently introduced malware that led to massive data breaches, [adopted in-toto as part of its redesigned security system](https://www.solarwinds.com/resources/whitepaper/setting-the-new-standard-in-secure-software-development-the-solarwinds-next-generation-build-system/delivery). in-toto is also a core part of [SLSA](https://github.com/in-toto/friends/tree/main/slsa), the industry’s leading software supply chain best practices framework. - -## How Scudo works - -The whitepaper offers a very high-level conceptual design of how Scudo works (see diagram). A supply chain orchestrator signs the image to be uploaded, and its associated in-toto metadata, and maps the relevant metadata to a corresponding layout. As the name implies, the layout defines the steps of a software supply chain that must be carried out in order to write, test, package and distribute your software. Put simply, the metadata says what was done while the layout states what was supposed to be done. Agreement between the two indicates the client is receiving a secure copy of the requested software. - - - -**Figure 1:** *Scudo modifies the standard Uptane structure by introducing in-toto metadata into one of the repositories. In this example, we have assumed in-toto metadata is stored alongside the images in the Image repository.* - -Scudo is built on a solution successfully implemented by [Datadog](https://www.datadoghq.com/) that uses both in-toto and [The Update Framework (TUF)](https://theupdateframework.io/), Uptane’s parent framework. The Datadog solution is used to secure hundreds of integrations for its Agent—a product that collects metrics for analysis of host machines. (You can read about this in-toto/TUF collaboration in a [Datadog blog](https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto/), which was written by Scudo team member [Trishank Karthik Kuppusamy](https://www.linkedin.com/in/trishank-karthik-kuppusamy/). The specification was also submitted and approved as an [in-toto enhancement (ITE)](https://github.com/in-toto/ITE/blob/master/ITE/2/README.adoc), -and serves as guidance for implementing compromise-resilient continuous integration/continuous delivery (CI/CD) pipelines). - -According to Kuppusamy, “Integrating in-toto with TUF allowed us to build a compromise-resilient CI pipeline three years before the SolarWinds incident. The combination protected our Agent integrations against man-in-the-middle attacks anywhere in the software publication lifecycle, from the moment our developers release new or updated integrations to when our end-users install them. The combination of in-toto and TUF uses defense in depth to an extent where these attacks can be rendered infeasible.” - -As shown in the diagram below, Scudo stores in-toto metadata in the Image repository, one of two repositories employed in Uptane’s architecture. The dual repository design is a major difference between Uptane and TUF, and therefore determining which should be used to store in-toto metadata was one of the first design decisions the Scudo team addressed. While either repository can actually be adapted to this purpose, the Image repository, which is signed by offline keys, offers the more secure option.
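To make the layout-versus-metadata idea concrete, the sketch below mimics, in a few lines of Python, the comparison a verifier performs before accepting an image. It is deliberately simplified and is not the in-toto data model or the Scudo specification; the step names, fields, and matching rule are invented for illustration.

```python
# Simplified illustration of layout-vs-metadata checking; not the real
# in-toto or Scudo data model. Step names, fields, and the matching rule
# are invented for this example.
layout = {  # what is SUPPOSED to happen, defined by the supply chain orchestrator
    "steps": [
        {"name": "build",   "expected_command": "make firmware.img"},
        {"name": "test",    "expected_command": "make test"},
        {"name": "package", "expected_command": "make release"},
    ]
}

link_metadata = {  # what actually HAPPENED, one signed record per step
    "build":   {"command": "make firmware.img", "products": ["firmware.img"]},
    "test":    {"command": "make test",         "products": []},
    "package": {"command": "make release",      "products": ["firmware.tar.gz"]},
}

def verify(layout: dict, links: dict) -> bool:
    """Accept the image only if every expected step ran with the expected command."""
    for step in layout["steps"]:
        link = links.get(step["name"])
        if link is None or link["command"] != step["expected_command"]:
            print(f"step {step['name']!r} missing or diverged; reject the image")
            return False
    return True

print("image accepted" if verify(layout, link_metadata) else "image rejected")
```

In the real framework the layout and each link record are cryptographically signed, and verification also checks artifact hashes and the chain of custody between steps, but the accept-or-reject decision follows the same shape.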
- -## Creating Defense-in-Depth with other open source strategies - -In presenting Scudo, the whitepaper authors—Kuppusamy; [Aditya Sirish A Yelgundhalli](https://engineering.nyu.edu/student/aditya-sirish-yelgundhalli), [Marina Moore](https://cyber.nyu.edu/profile/marina-moore/), [Lois Anne DeLong](https://www.linkedin.com/in/lois-delong-0706a128/), and [Justin Cappos](https://ssl.engineering.nyu.edu/personalpages/jcappos/) of New York University; and [Santiago Torres-Arias](https://www.cerias.purdue.edu/site/people/faculty/view/3153) of Purdue University—acknowledge a number of other open source strategies that could perhaps be adapted to an automotive space. The paper presents an overview of these technologies, which include [Sigstore](https://www.sigstore.dev/), [Grafeas](https://grafeas.io/), and [SBoM](https://www.ntia.doc.gov/files/ntia/publications/sbom_minimum_elements_report.pdf) formats, such as [SPDX](https://spdx.dev/) and [CycloneDX](https://github.com/CycloneDX), and shows how Scudo, through in-toto, offers some distinct advantages over these choices. For one thing, in-toto has the ability to cryptographically track artifacts through the full supply chain. It also includes primitives that serve to define and enforce policies. - -Yet, the authors also point out that, like both Uptane and in-toto, Scudo can work as a complementary element in other systems. “Since its inception, in-toto was intended to close the gap between software repositories and the developer,” explains Torres-Arias, who was the lead developer of in-toto while completing his doctorate at New York University. “However, to provide some symmetry, other solutions like TUF and Uptane work fantastically at closing the gap between software repositories and software users. Because of this, these solutions are great when used together.” A multi-layered framework in which Scudo is paired with other solutions can help to create “Defense-in-Depth,” which, as defined by [US NIST IR8183](https://nvlpubs.nist.gov/nistpubs/ir/2017/NIST.IR.8183.pdf), is “the application of multiple countermeasures in a layered or stepwise manner” to “ensure that attacks missed by one technology are caught by another.” - -## Next steps for Scudo -The Scudo team is planning to publish a more formal specification of the framework as a [Proposed Uptane Revisions and Enhancements (PURE)](https://github.com/uptane/pures). The PURE document will propose changes to the [Uptane Standard](https://uptane.github.io/papers/uptane-standard.2.0.0.html) that could expand Scudo’s use as a defense against software supply chain threats in real-world applications. This expanded version of the Scudo specification will more closely examine the unique demands of the automotive industry, such as dealing with large and diverse codebases, and the reality that ECUs vary widely in terms of bandwidth and other resources. - -According to Prof. Justin Cappos, “It is essential that a viable software supply chain solution exists before an incident like SolarWinds impacts the automotive community, where it could cost many lives.” diff --git a/_posts/2023-01-05-gsoc.md deleted file mode 100644 index da9d8a05..00000000 --- a/_posts/2023-01-05-gsoc.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -layout: article -title: "Adventures in Open Source: Recognizing SSL’s GSoC ’22 Contributors" -subnav: blog -comments: true -tagline: "This summer, the Secure Systems Lab welcomed four first-time contributors to the Google Summer of Code program."
-author: 'Lois Anne DeLong'
-categories:
-  - 'in-toto'
-  - 'TUF'
-
----
-
-
-**Photo Caption:** *The 2022 GSoC contributors and their mentors show thumbs-up to a productive summer. Clockwise from left: Lakshya Gupta, Lukas Pühringer, Pradyumna Krishna, Aditya Sirish A Yelgundhalli, Santiago Torres-Arias, Abhisman Sarkar. Missing: Lenery Chen.*
-
-This summer, the [Secure Systems Lab](https://ssl.engineering.nyu.edu/) welcomed four first-time contributors to the Google Summer of Code program. Lakshya Gupta, Pradyumna Krishna, and Lenery Chen worked on projects to improve the lab’s software supply chain project [in-toto](https://in-toto.io/), while Abhisman Sarkar tackled a version management issue for [The Update Framework](https://theupdateframework.io/) (TUF), a framework that delivers secure software updates for repositories and for programs running in the cloud.
-
-For contributors and mentors alike, the summer appears to have been beneficial. Abhisman describes his GSoC time as “an incredible learning opportunity for people getting started with open source,” and adds, “I've learned a lot from my mentors regarding good code practices, and the knowledge gained up until now will be invaluable.” Gupta chimes in that he found the in-toto community “warm and welcoming.” And mentor Lukas Pühringer, a researcher and engineer for NYU’s Secure Systems Laboratory, described the experience of working with his student as “fun and fruitful.”
-
-At the end of their summer sojourn, we reached out to our contributors and asked them to share some of the lessons they learned along the way.
-
-[Abhisman Sarkar](https://www.linkedin.com/in/abhisman-sarkar-0398121ab/?trk=public_profile_browsemap&originalSubdomain=in) is an undergraduate at Siksha 'O' Anusandhan University in Bhubaneswar, India, and was this summer’s lone contributor to the TUF project. His research tackled a significant problem for a number of open source projects: how to migrate to newer releases of a software package without worrying about version compatibility. His project, which was mapped out in a [TAP](https://github.com/theupdateframework/taps/blob/master/tap14.md), or TUF Augmentation Proposal, required two specific changes to the TUF specification: the way a repository manages specification versions, and the client update process itself. These modifications can simplify how clients find and access TUF metadata at the highest specification version that is compatible with their implementations.
-
-Abhisman shared that he was a “newbie” when it came to TUF, and even though he was “somewhat familiar” with other open source tools, he still acknowledged that “getting into open source was somewhat daunting to me.” What drew him to the TUF project was that it used Python, and “the use of a familiar language eased my tensions.” As the project moved forward, he found his skills with the language developing to the point where he could joke about calling himself “a Pythonista.” He also pointed out that he got to apply what he learned about Git and GitHub, and came to appreciate how great a tool Git is for source code management. Despite some initial anxieties, he found that acting immediately on what he was learning was also “the most fun part” of the GSoC experience.
-
-His work was conducted under mentors [Marina Moore](https://www.linkedin.com/in/marina-moore-5a7242105/) of NYU and Zack Newman of Chainguard.
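The version-negotiation idea behind Abhisman’s TAP can be sketched in a few lines of Python. This is a conceptual illustration only, not the TAP’s actual mechanism or the python-tuf API, and the version strings are made up; the point is simply that a repository can publish metadata under several specification versions and a client picks the highest one it also understands.

```python
# Conceptual illustration only (not the TAP mechanism or python-tuf API):
# the client selects the highest TUF specification version that both it and
# the repository support, so older clients keep working while newer clients
# benefit from newer metadata formats.

def pick_spec_version(repo_versions, client_supported):
    """Return the highest specification version both sides understand, or None."""
    def as_tuple(version):
        return tuple(int(part) for part in version.split("."))

    common = set(repo_versions) & set(client_supported)
    return max(common, key=as_tuple) if common else None

# Hypothetical example: the repository publishes three spec versions,
# while this client only implements the first two.
print(pick_spec_version(["1.0.0", "2.0.0", "3.0.0"], ["1.0.0", "2.0.0"]))  # -> "2.0.0"
```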
-
-For [Lakshya Gupta](https://www.linkedin.com/in/lakshya806/?originalSubdomain=in), a student at Birla Institute of Technology and Science in South Goa, India, choosing to work on modifying the in-toto [Jenkins plug-in](https://plugins.jenkins.io/in-toto/) had a lot to do with how the project fit with other research he had performed to date. He also saw it as an opportunity to deepen his knowledge of Jenkins and, on a broader scope, learn more about how CI/CD (Continuous Integration and Continuous Delivery) processes work.
-
-The project Lakshya took on can enable users to generate [in-toto attestations](https://github.com/in-toto/attestation/blob/main/README.md) with [SLSA provenance](https://slsa.dev/provenance/v0.1) metadata. The additional information that can be stored in the provenance format – including the ID of the builder, the source that triggered the build, and its start and end times – can deliver enhanced security through greater transparency. As documented in the [blog](https://summerofcode.withgoogle.com/programs/2022/projects/mR4u5su7) he shared with GSoC, Lakshya’s contributions were twofold. First, he updated the [in-toto-java library](https://github.com/in-toto/in-toto-java/blob/master/README.md) containing the model code for SLSA Provenance to support v0.2. And, second, he updated the Jenkins plugin so the in-toto library could generate SLSA provenance for builds performed in Jenkins instances.
-
-Lakshya confessed that one of the hardest challenges of the project was “overcoming imposter syndrome and learning what was required.” After discussing the problems with his mentor, [Aditya Sirish A. Yelgundhalli](https://engineering.nyu.edu/student/aditya-sirish-yelgundhalli), a Ph.D. candidate at NYU Tandon, he said, “we found a way to proceed by dividing the project into smaller chunks and completing one thing at a time.” Another hurdle he cited was “upgrading the project to [JDK version 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html), a process that included a lot of back-and-forth communication and code changes to the plugin repository.”
-
-[Pradyumna Krishna](https://www.linkedin.com/in/pradyumnakrishna/), who completed his undergraduate degree at the [Deen Dayal Upadhyaya College](https://dducollegedu.ac.in/) of the University of Delhi in 2022, spent his summer working on a project called the [Dead Simple Signing Envelope](https://github.com/secure-systems-lab/dsse#readme). DSSE is a novel standard for signing arbitrary data. The actual tasks he performed over the summer included implementing the protocol for the creation and verification of DSSE signatures, and developing “a working, fully tested, and documented DSSE signature wrapper for in-toto metadata, that may be used interchangeably with the existing signature wrapper.”
-
-Like Abhisman, Pradyumna liked that the project was based on Python, a programming language with which he had experience. Throughout the summer, he noted he was able to build on that experience, learning “some new things, like how the release cycle works in software development, and how to maintain code quality.” But, beyond that, he was drawn to developing DSSE because “it deals with software security and encryption.” Particularly challenging, but also energizing, was that the implementation took place in a library that in-toto shares with its sister project, TUF, and the goal was to accommodate both projects.
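For readers unfamiliar with DSSE, the sketch below shows the general shape of the envelope and of the pre-authentication encoding (PAE) that the signature covers, as described in the DSSE specification. It is a simplified illustration rather than the library Pradyumna worked on; the signer is a stand-in so the snippet runs without a cryptographic library, and the example payload type is indicative only.

```python
import base64
import json

def pae(payload_type: str, payload: bytes) -> bytes:
    """Pre-Authentication Encoding: "DSSEv1" SP LEN(type) SP type SP LEN(payload) SP payload."""
    type_bytes = payload_type.encode("utf-8")
    return b"DSSEv1 %d %s %d %s" % (len(type_bytes), type_bytes, len(payload), payload)

def make_envelope(payload_type: str, payload: bytes, keyid: str, sign) -> dict:
    """Wrap an arbitrary payload in a DSSE envelope; `sign` returns raw signature bytes.
    A real implementation would sign with an asymmetric key (e.g., ed25519)."""
    signature = sign(pae(payload_type, payload))
    return {
        "payload": base64.b64encode(payload).decode(),
        "payloadType": payload_type,
        "signatures": [{"keyid": keyid, "sig": base64.b64encode(signature).decode()}],
    }

# Stand-in signer so this sketch runs without a crypto library; not for real use.
fake_sign = lambda message: b"fake-signature-over:" + message[:16]

envelope = make_envelope(
    payload_type="application/vnd.in-toto+json",  # indicative payload type only
    payload=json.dumps({"_type": "link", "name": "build"}).encode(),
    keyid="demo-key",
    sign=fake_sign,
)
print(json.dumps(envelope, indent=2))
```

Verification recomputes the same PAE string over the decoded payload and checks each signature against it, which is what allows one signature wrapper to serve both in-toto and TUF metadata.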
Pradyumna’s primary mentor, Pühringer, notes that he “implemented a security critical enhancement in a library that is shared by two security frameworks and used in production to secure real world software supply chains.”
-
-Perhaps the most valuable skills Pradyumna developed over the summer were not technical, but rather interpersonal. “I was not comfortable communicating with people in English,” he explained, adding that a lack of English proficiency “sometimes made it hard for me to express a scenario.” But, he adds, “with the help and support of mentors, it never set me back.”
-
-Last, but certainly not least, Lenery Chen, a master’s degree student at Southern University of Science and Technology in Shenzhen, China, described his work this summer as “a small step, but it is these small steps that build up supply-chain security.” Specifically, he, like Lakshya, was involved in developing SLSA provenance support, and like Pradyumna, his work required developing a library for DSSE. But, Lenery’s focus was on using the [Rust implementation](https://github.com/in-toto/in-toto-rs/blob/master/README.md) of in-toto to apply these tools to the [Rebuilderd project](https://github.com/kpcyrd/rebuilderd/blob/main/README.md), an open source verification system that confirms a package rebuilt in an identical environment is identical to the original.
-
-Though Lenery was new to GSoC, he observed that he had participated for three years “in a similar project named [Open Source Promotion Plan](https://summer-ospp.ac.cn/), hosted by the Institute of Software of the Chinese Academy of Sciences.” He was also not completely new to in-toto, as he reported applying for an in-toto project last year, but “not approaching the proposal seriously.” So as not to make that mistake again, he said, “this time I worked with in-toto before GSoC began by fixing some minor issues.”
-
-Lenery came to the GSoC program hoping to gain experience in “designing a larger real project,” involving “multi-person cooperative development and communication.” Looking back on the experience, he commented that it was “inspiring” to work on a “project used by some giant companies.”
-
-All of the participants mentioned that they would like to continue as contributors in the future should their academic and work schedules permit. Abhisman noted he wishes “to work on more TAPs and build newer features for TUF implementations. The people I've been introduced to and the learning culture was amazing and I do not want to let go of that.” Lenery adds, “I will continue to advance the development until in-toto-rs becomes stable. It's my responsibility to finish what I started.”
diff --git a/_posts/2023-03-17-scudo-accepted.md b/_posts/2023-03-17-scudo-accepted.md
deleted file mode 100644
index 960a3712..00000000
--- a/_posts/2023-03-17-scudo-accepted.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-layout: article
-title: Uptane Community Endorses Scudo as Automotive Software Supply Chain Security Solution
-subnav: blog
-comments: true
-tagline: "Last year, the Uptane project released a whitepaper that introduced Scudo, an open framework that utilizes in-toto to secure automotive software supply chains. We are now excited to share that Scudo has been accepted by a simple majority of the active members of the Uptane community."
-author: 'Lois Anne DeLong and Aditya Sirish'
-categories:
-  - 'in-toto'
-  - 'Uptane'
----
-
-
-Last year, the Uptane project released a whitepaper that introduced Scudo, an open framework that utilizes in-toto to secure automotive software supply chains. This was followed shortly afterwards by the development of a Proposed Uptane Revisions and Enhancements (PURE) document titled “Scudo: Addressing Software Supply Chain Security in Uptane.” Referred to as PURE 3, this document included a more detailed discussion of how the framework can be integrated into automotive software supply chains.
-
-We are now excited to share that the Scudo PURE has been accepted by a simple majority of the active members of the Uptane community. In this context, the “accepted” status indicates that the Uptane community recommends and stands behind Scudo as the framework of choice for automotive software supply chain security guarantees.
-
-Commenting on the new status of Scudo, Trishank Kuppusamy, a Staff Security Engineer at Datadog, sees it as another step forward for an industry “that pioneered software supply chain security by co-designing and adopting Uptane.” Kuppusamy, a longtime contributor to both Uptane and in-toto, adds, “now with its extension, Scudo, the automotive industry will gain the same kind of defense-in-depth security that was first seen with the [Datadog Agent integrations](https://www.datadoghq.com/blog/engineering/secure-publication-of-datadog-agent-integrations-with-tuf-and-in-toto/). With Uptane, we were able to safely give a robot on the cloud the power to install different firmware on different vehicles. Now, incorporating its sibling technology, in-toto, to create Scudo makes it possible to add the entire end-to-end, cryptographically verifiable history of how that firmware was developed, tested, and released in the first place. The addition of in-toto means that you can detect meddler-in-the-middle attacks such as [SUNBURST](https://www.reversinglabs.com/blog/sunburst-the-next-level-of-stealth) before any tampered firmware can be installed on vehicles. This is critical for meeting the Biden-Harris administration’s [EO 14028](https://www.federalregister.gov/d/2021-10460/p-54) and [strategy](https://www.federalregister.gov/d/2021-10460/p-54) for improving our national cybersecurity.”
-
-“Protecting the integrity of the software supply chain will soon become a priority for automotive stakeholders,” adds André Weimerskirch, a vice president with global responsibility for product cybersecurity at Lear, and a member of the Uptane Steering Committee. “The automotive industry is continuously improving security and raising the bar. Increasing protection for the software supply chain via Scudo is the next logical step after deploying secure software over-the-air strategies (such as Uptane) and establishing a comprehensive software bill of materials (SBOM).”
-
-Professor Justin Cappos of NYU Tandon, who is also on the Uptane Steering Committee, echoes the sentiments above. “This is an exciting new capability for the automotive sector which directly comes from the use of open standards,” he observes. “In this case, two communities have worked hard to ensure they interoperate seamlessly. This is one of the core benefits of open standards — the community members can do the work to integrate technologies so that the result is greater than the sum of their parts.”
-
-PURE 3 can be found in the [Uptane PUREs repository](https://github.com/uptane/pures/blob/main/pure3.md).
For a higher-level overview of Scudo, the latest version of the whitepaper can be found [here](https://uptane.github.io/papers/scudo-whitepaper.pdf).
\ No newline at end of file
diff --git a/_posts/2023-10-20-uptane-siterevision.md b/_posts/2023-10-20-uptane-siterevision.md
deleted file mode 100644
index 00c4ed79..00000000
--- a/_posts/2023-10-20-uptane-siterevision.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-layout: article
-title: "Uptane Website 2.0: Project “Front Door” Gets a Much Needed Upgrade"
-subnav: blog
-comments: true
-tagline: "In less than a decade, the Uptane secure software update project has made a significant impact, not only on the automotive world for which it was conceived, but increasingly on other industrial sectors where updates are routinely received over the air."
-author: 'Lois Anne DeLong'
-categories:
-  - 'Uptane'
----
-
-In less than a decade, the Uptane secure software update project has made a significant impact, not only on the automotive world for which it was conceived, but increasingly on other industrial sectors where updates are routinely received over the air. As the number of potential uses for Uptane has multiplied, it has become more important than ever to help users securely and efficiently find the data needed to successfully integrate the framework into their operations. Uptane’s website has a critical role to play in providing this data but, unfortunately, in its initial incarnation, it was not particularly easy to navigate.
-
-On October 16, 2023, the Uptane community released Version 2.0 of its [website](https://uptane.github.io/), which addresses many of the limitations of the previous site. The new version features a cleaner and more contemporary design, and a more user-friendly architecture. Links to important documents, such as the most recent incarnation of the [Uptane Standard for Design and Implementation](https://uptane.github.io/docs/standard/uptane-standard), can be accessed directly from the home page, as can a [one-page list](https://uptane.github.io/docs/learn-more/getting-started) of steps for implementing the framework. The new website also includes a blog that provides a public forum for issues critical to the advancement of the technology.
-
-Credit for the new site is largely due to the efforts of [Abhijay Jain](https://www.linkedin.com/in/abhijayjain007/), a student at Guru Gobind Singh Indraprastha University in Delhi, India, who took on the site reconstruction as his project for the 2023 [Google Summer of Code](https://summerofcode.withgoogle.com/programs/2023). Among the innovations Jain brought to the project was the use of [Docusaurus](https://docusaurus.io/) to construct the site in place of the existing Jekyll framework.
-
-To ensure the community had sufficient input into the revision project, a number of its members worked with Jain over the summer. In addition to the “mentors of record,” Lois Anne DeLong of New York University and Philip Lapczynski of Renesas, Jain worked with Jon Oster of Toradex, and Uptane Steering Committee members Ira McDonald of High North and Dr. Justin Cappos of NYU Tandon.
-
-The Uptane community welcomes feedback on the new design. Feel free to send suggestions to uptane-standards@googlegroups.com.
- diff --git a/_posts/2024-07-01-contributions-to-git.md b/_posts/2024-07-01-contributions-to-git.md deleted file mode 100644 index 3b99e914..00000000 --- a/_posts/2024-07-01-contributions-to-git.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -layout: article -title: "Our Contributions to Git" -subnav: blog -comments: false -author: 'Justin Cappos' ---- - -Content coming soon. diff --git a/_posts/2024-07-01-contributions-to-reproducible-builds.md b/_posts/2024-07-01-contributions-to-reproducible-builds.md deleted file mode 100644 index d4a09c9d..00000000 --- a/_posts/2024-07-01-contributions-to-reproducible-builds.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -layout: article -title: "Our Contributions to Reproducible Builds" -subnav: blog -comments: false -author: 'Justin Cappos' ---- - -Content coming soon. diff --git a/_test/yamale_schema.yml b/_test/yamale_schema.yml index e9618615..22f4d5ff 100644 --- a/_test/yamale_schema.yml +++ b/_test/yamale_schema.yml @@ -48,8 +48,6 @@ project: site: str(required=False) status: include('status') description: list(str()) - products: str() - people: list(any(include('person'), include('proj_person'))) tags: list(include('tag')) proj_person: diff --git a/blog.html b/blog.html deleted file mode 100644 index 5e84529f..00000000 --- a/blog.html +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: SSL Blog -subnav: blog -permalink: /blog/ -layout: default ---- - -
-
- - - diff --git a/papers/abc-material.zip b/papers/abc-material.zip deleted file mode 100644 index 49b8c08a..00000000 Binary files a/papers/abc-material.zip and /dev/null differ