A Need-based Assessment for Building a National Cyber Security Workforce
Seymour Goodman, Georgia Institute of Technology, United States
Democratization in Science and Technology through Cloud Computing
Ivona Brandić, Vienna UT, Austria
Model-driven Development of Multi-View Modelling Tools - The MUVIEMOT Approach
Dimitris Karagiannis, University of Vienna, Austria
Social Business Intelligence - OLAP Applied to User Generated Contents
Matteo Golfarelli, University of Bologna, Italy
Advanced Persistent Threats & Social Engineering
Edgar Weippl, University of Vienna, SBA Research, Austria
A Need-based Assessment for Building a National Cyber Security Workforce
Seymour Goodman
Georgia Institute of Technology
United States
Brief Bio
Seymour (Sy) E. Goodman is Professor of International Affairs and Computing, jointly at the Sam Nunn School of International Affairs and the College of Computing at the Georgia Institute of Technology. He serves as Co-Director of both the Georgia Tech Information Security Center (GTISC) and the Center for International Strategy, Technology and Policy (CISTP).
Prof. Goodman's research interests include international developments in the information technologies (IT), technology diffusion, IT and national security, and related public policy issues. Areas of geographic interest include the former Soviet Union and Eastern Europe, Latin America, the Middle East, South and Southeast Asia, and parts of Africa. Earlier research was in areas of statistical and continuum physics, combinatorial algorithms, and software engineering. Current work includes research on the global diffusion of the Internet and the protection of large IT-based infrastructures.
Immediately before coming to Georgia Tech, he was Director of the Consortium for Research on Information Security and Policy (CRISP) at the Center for International Security and Cooperation, with an appointment in the Department of Engineering Economic Systems and Operations Research, both at Stanford University; and Professor of MIS and a member of the Center for Middle Eastern Studies at the University of Arizona. Earlier tenured and visiting appointments have been at the University of Virginia (Applied Mathematics, Computer Science, and Soviet and East European Studies), Princeton University (Mathematics, and the Woodrow Wilson School of Public and International Affairs), and the University of Chicago (Economics).
Prof. Goodman is Contributing Editor for International Perspectives for the Communications of the ACM, and has served with many government, academic, professional society, and industry advisory and study groups. His research pursuits have taken him to all seven continents and over 80 countries, and have included testimony before legislative bodies and Ministerial-level briefings. He is currently principal investigator on two large grants from the National Science Foundation and the MacArthur Foundation.
Prof. Goodman was an undergraduate at Columbia University, where he started as an aspiring English major, and obtained his Ph.D. from the California Institute of Technology, where he worked on problems of applied mathematics and mathematical physics.
Abstract
In the United States alone, well over a million organizations have become so dependent on cyberspace that their vital interests are now vulnerable to attacks, accidents, and design failures that may compromise those interests. Many experts believe the situation is getting worse: new vulnerabilities are being pumped into cyberspace, and the bad guys are coming up with more sophisticated and scalable attacks faster than the good guys are coming up with improved defenses. The technical R&D pipelines do not show much promise for generating solutions that will provide discernible, measurable, readily and massively scalable improvements in cyber security for enormous populations of users. Nor is there much expectation that a broadly operational engineering science of cyber security, a set of voluntary standards and calls for information sharing, or a set of government laws and enforcing institutions will achieve this end any time soon.
Cyberspace is thus an environment where all dependent organizations are vulnerable and at risk. Even the NSA admits to its inability to guarantee its own information security. But not all users are equally vulnerable. There is a multitude of products, procedures, standards, and policies that, if appropriately used, can make some users safer and more secure than others in cyberspace. But it takes knowledgeable people to bring these possibilities to bear, and to sustain and update their use. However, many organizations cannot or will not invest in many, or any, full-time cyber security employees. Since millions of organizations worldwide are largely responsible for their own cyber security, this implies a huge workforce need and shortfall. Many organizations may have to depend on personnel who are not full-time cyber security professionals to perform security functions, or to be able and knowledgeable enough to obtain needed training, products, and services from outside their organizations.
A premise of this presentation is that the primary bearer of risk when things go wrong in cyberspace is the organization that has become so dependent on computer-communications systems, not the hardware and software in those systems. There are an enormous number and variety of such organizations in national, state, and local governments and in the business and educational sectors, and the set arguably includes most organizations that have a payroll, engage in on-line transactions, keep their intellectual property and other vital information on computers, or are otherwise strongly reliant on their presence on the worldwide web. They have many different forms of dependencies and risk tolerances. Their customers and the users of their products and services make up extended organizations of dependencies and risk.
It will be useful to distinguish need, demand, and supply in the context of the cyber security workforce. Following [NRC 2013]: “Need is the number (and skill mix) of cybersecurity workers that are required to provide satisfactory cybersecurity (a judgment that will vary according to who makes the assessment). Demand is expressed by the desired capabilities stated in job descriptions, the number of such positions that are created and filled, and the salaries offered to those who have those abilities. Demand will fall short of national or societal need to the extent that cybersecurity is a public good—that is, organizations will invest to meet their own requirements but not necessarily to achieve societally desirable overall requirements. Demand can also fall short of an organization’s own needs if (1) the organization lacks the required resources or (2) an organization underestimates the threats it faces. Supply is the number of available qualified workers willing to fill positions, and is a function of the visibility and attractiveness of cybersecurity occupations, the availability of appropriate training and education, and (as in all fields) the overall labor market in which potential workers respond to salary and other signals about demand.”
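The need/demand/supply distinction quoted above can be made concrete with a minimal sketch. The class below is not from the talk or [NRC 2013]; the numbers are hypothetical placeholders chosen only to illustrate how the two gaps the quotation describes relate to each other.

```python
# A minimal sketch of the NRC 2013 need/demand/supply distinction.
# All figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class WorkforceAssessment:
    need: int     # workers required for satisfactory security (assessor-dependent)
    demand: int   # funded, advertised positions
    supply: int   # qualified workers willing to fill positions

    def demand_gap(self) -> int:
        """How far demand falls short of need (under-investment, public-good effect)."""
        return max(self.need - self.demand, 0)

    def hiring_gap(self) -> int:
        """How far supply falls short of demand (positions that go unfilled)."""
        return max(self.demand - self.supply, 0)

# Hypothetical organization: needs 10 security workers, budgets for 6,
# and can find only 4 qualified candidates.
a = WorkforceAssessment(need=10, demand=6, supply=4)
print(a.demand_gap(), a.hiring_gap())  # -> 4 2
```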
Democratization in Science and Technology through Cloud Computing
Ivona Brandić
Vienna UT
Austria
Brief Bio
Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien). Prior to that, she was Assistant Professor at the Department of Scientific Computing, Vienna University. She received her PhD degree from Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union's GEMSS (Grid-Enabled Medical Simulation Services) project. She is involved in the European Union's S-Cube project and she is leading the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project funded by the Vienna Science and Technology Fund (WWTF). She is a management committee member of the European Commission's COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne, Australia.
In 2011 she received the Distinguished Young Scientist Award from the Vienna University of Technology for her HALEY project on Holistic Energy Efficient Hybrid Clouds. Her interests comprise Service Level Agreement and Quality of Service management in large scale distributed systems, autonomic computing, workflow management for scientific applications, and energy efficient large scale distributed systems (Cloud, Grid, Cluster, etc.). She has published more than 50 scientific journal, magazine and conference publications and co-authored a textbook on federated and self-manageable Cloud infrastructures. I. Brandic co-authored the European Union's Cloud Computing report paving future research directions of the EU. In 2010 she chaired the International Conference on Utility and Cloud Computing held in Chennai, India. She has served on more than 50 program committees (among others EuroPar, COMPSAC, CloudCom) and has been an invited reviewer for more than 10 international journals. In 2011 she edited two special issues for Future Generation Computer Systems (Elsevier) and the Scientific Programming Journal (IOS Press). I. Brandic has been an invited expert evaluator for the European Commission, the French National Research Agency (ANR), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Netherlands Organisation for Scientific Research (NWO).
Abstract
Currently, many data centers are adopting Cloud Computing technology to achieve high performance and scalability for their applications while maintaining low costs. Service provisioning in the Cloud is based on a set of predefined non-functional properties specified and negotiated by means of Service Level Agreements (SLAs). Cloud workloads are dynamic and change constantly; thus, to reduce constant human intervention, self-manageable Cloud techniques are required to comply with the customers' agreed SLAs. In this talk we discuss flexible and reliable management of SLAs, which is of paramount importance for both Cloud providers and consumers. Further, we discuss novel approaches to assessing virtualization costs, illustrated with a real-life example from the area of high performance computing. Both SLA management and the systematic assessment of virtualization costs lead to new democratic processes that go far beyond traditional resource allocation.
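To make the idea of self-manageable SLA compliance concrete, here is a minimal sketch, not from the talk, of a monitor-analyze-act step: measured runtime metrics are compared against negotiated SLA thresholds, and a hypothetical adaptation action fires before a violation occurs. The metric names, thresholds, and the scale_out() hook are all invented for illustration.

```python
# Minimal autonomic SLA-compliance check (illustrative names and values).
SLA = {"availability_pct": 99.5, "response_time_ms": 200}  # agreed objectives

def scale_out(service: str) -> None:
    # Placeholder for a real adaptation action (e.g., adding a VM instance).
    print(f"scaling out {service}")

def check_sla(service: str, measured: dict, margin: float = 0.1) -> None:
    """Act when a measured metric violates, or comes close to, its threshold."""
    if measured["availability_pct"] < SLA["availability_pct"]:
        scale_out(service)  # already violating: react immediately
    elif measured["response_time_ms"] > SLA["response_time_ms"] * (1 - margin):
        scale_out(service)  # within 10% of the limit: act pre-emptively

# 185 ms is within 10% of the 200 ms limit, so the check scales out pre-emptively.
check_sla("simulation-svc", {"availability_pct": 99.7, "response_time_ms": 185})
```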
Model-driven Development of Multi-View Modelling Tools - The MUVIEMOT Approach
Dimitris Karagiannis
University of Vienna
Austria
Brief Bio
Dimitris Karagiannis is head of the Knowledge Engineering research group at the University of Vienna. His main research interests include knowledge management, modelling methods and meta-modelling. Besides his engagement in national and EU-funded research projects, Dimitris Karagiannis is the author of research papers and books on Knowledge Databases, Business Process Management, Workflow Systems and Knowledge Management. He serves as an expert at various international conferences and is presently on the editorial boards of Business & Information Systems Engineering (BISE), Enterprise Modelling and Information Systems Architectures, and the Journal of Systems Integration. He is a member of IEEE and ACM, is on the executive board of GI, and serves on the steering committee of the Austrian Computer Society and its Special Interest Group on IT Governance. Recently he started the Open Model Initiative (www.openmodels.at) in Austria. In 1995 he established the Business Process Management Systems approach (BPMS), which has been successfully implemented in several industrial and service companies, and he is the founder of the European software and consulting company BOC (http://www.boc-group.com), which implements software tools based on the meta-modelling approach.
Abstract
The computer science and information systems communities discuss the term "model" in different ways, depending on which fundamentals they build upon. From these discussions, requirements are derived that must be supported by concepts for transforming modelling methods into modelling tools.
A lifecycle that supports this transformation process, developed in the Open Models Laboratory (www.omilab.org), is presented. It encompasses the phases create, design, formalize, develop and deploy.
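The "formalize" phase of such a lifecycle rests on the meta-modelling idea that a modelling method is captured as a metamodel from which a tool is generated. The sketch below is an assumption-laden illustration of that idea, not the MUVIEMOT implementation; all class and type names are invented.

```python
# Minimal meta-modelling sketch: a metamodel defines the object and relation
# types a generated tool accepts; models must instantiate it (names invented).
class Metamodel:
    def __init__(self, object_types: set, relation_types: dict):
        self.object_types = object_types        # e.g., {"Task", "Event"}
        self.relation_types = relation_types    # name -> (source type, target type)

class Model:
    def __init__(self, metamodel: Metamodel):
        self.mm = metamodel
        self.objects = {}   # object id -> object type

    def add_object(self, oid: str, otype: str) -> None:
        if otype not in self.mm.object_types:
            raise ValueError(f"{otype} is not part of the metamodel")
        self.objects[oid] = otype

    def connect(self, rel: str, src: str, dst: str) -> None:
        s, t = self.mm.relation_types[rel]
        if (self.objects[src], self.objects[dst]) != (s, t):
            raise ValueError(f"{rel} must link a {s} to a {t}")

# A toy process-modelling method: tasks connected by "follows" relations.
mm = Metamodel({"Task", "Event"}, {"follows": ("Task", "Task")})
m = Model(mm)
m.add_object("t1", "Task"); m.add_object("t2", "Task")
m.connect("follows", "t1", "t2")   # valid; ill-typed links raise errors
```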
Towards an Open Models Community, it is also essential to have a common platform where models can be freely developed, managed and shared. This environment is available at the Open Models Laboratory.
Experiences from community work on more than 20 best practices (www.omilab.org/booklet) realised using the OMiLab lifecycle will be shown.
Social Business Intelligence - OLAP Applied to User Generated Contents
Matteo Golfarelli
University of Bologna
Italy
Brief Bio
Matteo Golfarelli received the Ph.D. degree for his work on autonomous agents in 1998 from the University of Bologna. Since 2005 he has been an associate professor at the same university, teaching information systems, database systems, and data mining. He has published more than 90 papers in refereed journals and international conferences in the fields of pattern recognition, mobile robotics, multi-agent systems, and business intelligence, which is now his main research field. Within this area, over the last 15 years he has explored many relevant topics such as collaborative and pervasive BI, temporal Data Warehouses, and physical and conceptual Data Warehouse design. In particular, he proposed the Dimensional Fact Model, a conceptual model for Data Warehouse systems that is widely used in both academic and industrial contexts. His current research interests include distributed and semantic data warehouse systems, social business intelligence, and open data warehouses. He has joined several research projects in the above areas and has been involved in the PANDA thematic network of the European Union concerning pattern-base management systems.
Abstract
The huge quantity of information, talks, posts, and papers available on the web cannot be ignored by companies. Being aware in near-real time of hot topics and opinions about a product or a topic is strategic for making better decisions. Unfortunately, this information is totally or partially unstructured and is therefore difficult to exploit. Most commercial solutions are "closed" applications, and most services are one-shot projects rather than stable monitoring systems, which allows only a limited exploitation of the information. Practitioners often refer to this family of tools as Opinion Mining software, Sentiment Analysis software, or Brand Reputation software. Many companies would prefer a solution that could be integrated into the enterprise information system and treated as yet another data flow to be included in the Business Intelligence platform and queried with the traditional tools that are well known to the users.
Social business intelligence is the discipline of combining corporate data with user-generated content to let decision-makers improve their business based on the trends perceived from the environment. Setting up a Social BI architecture requires contributions from several areas of computer science, such as Information Retrieval, Text Mining, Databases, Ontologies, and Artificial Intelligence. The keynote will describe the features of a Social BI architecture, survey the related research issues, and go into detail about the database and big data issues involved in creating BI-like capabilities.
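The core database idea, treating scored user-generated content as just another fact table that OLAP-style queries can roll up, can be sketched as follows. This is not the keynote's architecture; the table layout, topic names, and sentiment scores are invented, and the sentiment column stands in for the output of an upstream text-mining step.

```python
# Minimal sketch: user-generated posts as a fact table, rolled up OLAP-style.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE post_facts (
    post_day  TEXT,    -- date dimension
    topic     TEXT,    -- topic dimension (e.g., a product feature)
    sentiment REAL     -- measure: score in [-1, 1] from a text-mining step
);
INSERT INTO post_facts VALUES
 ('2014-04-01', 'battery', -0.6),
 ('2014-04-01', 'battery', -0.4),
 ('2014-04-01', 'display',  0.8),
 ('2014-04-02', 'battery', -0.2);
""")

# Roll-up by topic and day: the same aggregation pattern used on corporate data.
for row in con.execute("""
    SELECT topic, post_day, COUNT(*) AS posts, ROUND(AVG(sentiment), 2) AS avg_sent
    FROM post_facts GROUP BY topic, post_day ORDER BY topic, post_day"""):
    print(row)
```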
Advanced Persistent Threats & Social Engineering
Edgar Weippl
University of Vienna, SBA Research
Austria
Brief Bio
After graduating with a Ph.D. from the Vienna University of Technology, Edgar worked in a research startup for two years. He then spent one year teaching as an assistant professor at Beloit College, WI. From 2002 to 2004, while with the software vendor ISIS Papyrus, he worked as a consultant in New York, NY, Albany, NY, and Frankfurt, Germany. In 2004 he joined the Vienna University of Technology and founded the research center SBA Research together with A Min Tjoa and Markus Klemen. Edgar R. Weippl (CISSP, CISA, CISM, CRISC, CSSLP, CMC) is a member of the editorial board of Computers & Security (COSE) and organizes the ARES conference.
Abstract
Social engineering has long been a very effective means of attacking information systems. The term knowledge worker was coined by Peter Drucker more than 50 years ago and still describes very well the basic characteristics of many employees. Today, with current trends such as BYOD (bring your own device) and public cloud services, young professionals expect to use the same technology both in their private lives and at work. In global companies, teams are no longer geographically co-located but are staffed globally just in time. The decrease in personal interaction, combined with the plethora of tools in use (e-mail, IM, Skype, Dropbox, LinkedIn, Lync, etc.), creates new opportunities for attackers. As recent attacks on companies such as the New York Times, RSA, and Apple have shown, targeted spear-phishing attacks are an effective evolution of social engineering. Combined with zero-day exploits, they become a dangerous weapon that is often used in advanced persistent threats. In this talk we will explore some attack vectors and possible steps to mitigate the risk.
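One spear-phishing signal that defenders commonly check can be sketched in a few lines: mail whose display name impersonates a trusted colleague while the actual address comes from an outside domain. This is a minimal illustration, not a technique from the talk; the domain, names, and header values are invented, and real defenses layer many such signals (sender authentication, URL analysis, user training).

```python
# Minimal display-name spoofing heuristic (illustrative names and domain).
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"           # hypothetical corporate domain
KNOWN_NAMES = {"alice meyer", "bob kim"}  # names attackers tend to impersonate

def looks_like_spoof(from_header: str) -> bool:
    """Flag mail claiming a known colleague's name but sent from outside."""
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    return display.lower() in KNOWN_NAMES and domain != INTERNAL_DOMAIN

print(looks_like_spoof("Alice Meyer <alice.meyer@exarnple-mail.ru>"))  # True
print(looks_like_spoof("Alice Meyer <alice.meyer@example.com>"))       # False
```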