 ITU-T Workshop on "Addressing security challenges on a global scale"
Geneva, Switzerland, 6 (afternoon) to 7 December 2010
George Arnold (National Coordinator for Smart Grid Interoperability National Institute of Standards and Technology (NIST)): Cyber Security in the Smart Grid

The electric grid is one of the most complex and important infrastructures ever created, and is vital to modern quality of life and the economy. Generation of electricity is also a significant source of greenhouse gas emissions. The basic architecture of the grid has not changed much in 100 years, and its use of information technology to increase efficiency and reliability has lagged behind other infrastructures such as telecommunications. Modernization of the grid is central to many nations' efforts to address climate change and improve energy efficiency and reliability. The smart grid represents the integration of information and communications technologies into the existing power system to provide the measurement and control needed for increased use of distributed and renewable generation, enabling dynamic management of demand as well as generation, improved reliability, and support for electric vehicles. Introducing ICT into the grid presents significant new cybersecurity challenges. This presentation will describe efforts led by the National Institute of Standards and Technology to address cybersecurity challenges for the smart grid.
Taieb Debbagh (Secretary General, Ministry of Industry, Trade and New Technologies, Department of Post, Telecommunications & New Technologies, Morocco): National Cybersecurity Management System

This contribution proposes a global governance approach answering needs previously expressed by the ITU. It presents "NCSecMS", the National Cybersecurity Management System, a guide for the development of effective National Cybersecurity. It supports the implementation of a National Roadmap of Cybersecurity Governance through the following four components:

1. The "NCSec Framework" proposes five domains and 34 processes covering the main issues related to Cybersecurity at the national level, analogous to ISO/IEC 27002 at the organizational level;
2. The "NCSec Maturity Model" classifies "NCSec Framework" processes according to their level of maturity;
3. The "NCSec RACI chart" helps define roles and responsibilities for the main stakeholders concerned with Cybersecurity in a country;
4. The "NCSec Implementation Guide" generalizes the ISO/IEC 27001 and 27003 standards to the national level. It describes best practices that organizations can use to evaluate their readiness.

This contribution thus delivers a National Roadmap of Cybersecurity Governance, including a framework of best practices and a maturity model, to enable a global assessment of the different issues related to National Cybersecurity.
Mikhail Kader (Systems engineer for Security, Cisco Systems, Russia): Securing the Public & Private Clouds

During this session we will discuss current cloud computing service delivery models. We will also analyze security threats and vulnerabilities related to cloud computing and how they should be addressed.
Scott Vanstone (Cryptographic expert, RIM): Security by design

Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication and data origin authentication. Cryptography plays a fundamental role in securing information-based systems. Often cryptography (and security in general) is an afterthought, bolted on after the overall system has been completed.

In this talk I will discuss the importance of designing cryptography in from the very start, and provide examples where this has been done and success achieved. I will also speak about the state of the art in cryptography, why a large part of the world is moving in this direction, and how we can provide this new technology on constrained platforms such as smart cards and smartphones.
Don Thibeau (Chairman and President, The Open Identity Exchange): Open Identity and Open Trust Frameworks

Citizen involvement via online interactions requires trust: trust by citizens that their identity data is protected by government websites, and trust by government websites that private data is accessed only by the citizen. The same is true of business-to-business transactions, such as those in the Open Identity Exchange (OIX) Email Attribute Trust Framework, which certifies the legal and technical interoperability necessary for global identity providers like Google, Yahoo, AOL and others to exchange log-in and other information. This talk describes the Open Identity Framework created to meet global business-to-business needs. It also addresses government-to-citizen applications, such as meeting US government certification requirements while meeting the privacy requirements of citizens. The talk will share updates on the status of OIX Trust Framework Working Groups in the telecommunications, research and Internet identity markets.
Anil Saldhana (OASIS Co-chair, ID Cloud TC, OASIS ID Trust Steering Committee Member): Identity: Enterprise to the cloud

Enterprises have invested in solving Identity Management challenges for many years. While they have not fully conquered the field, they must now deal with the rapid advancement of Cloud Computing infrastructures, where different challenges exist. This talk will discuss the role of Identity as we move from the enterprise to the cloud.
Erik Andersen (Rapporteur, ITU-T SG 17): Use of public key infrastructure

Public-Key Infrastructure (PKI) is widely used for secure identification in many diverse areas, such as e-government and banking, and new areas for its use are emerging.

Introduction to basic PKI principles
The presentation briefly introduces the basic PKI concepts, including asymmetric keys, certificates, digital signatures, certification authorities, trust anchors, certificate revocation lists, etc.
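The concepts listed above can be illustrated with a minimal sketch of a digital signature, using textbook RSA with artificially small primes (illustration only; real PKI uses 2048-bit or larger keys and a padding scheme such as RSASSA-PSS):

```python
import hashlib

# Toy "textbook RSA" with artificially small primes, for illustration
# only -- not secure, and not a real PKI signature scheme.
p, q = 32749, 32719
n = p * q                            # modulus (public)
e = 65537                            # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept secret)

def sign(message):
    # Hash the message, then apply the private key to the digest.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature):
    # Recover the digest with the public key and compare.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"certificate contents")
assert verify(b"certificate contents", sig)
assert not verify(b"tampered contents", sig)
```

A certificate then binds such a public key (e, n) to an identity, with the binding itself signed by a certification authority that acts as the trust anchor.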

Use of PKI within Identity Management
Secure authentication is an important part of Identity Management. It involves not only an established and functional PKI, but also administrative procedures for different aspects of PKI, such as those specified by the CA/Browser Forum and the ETSI ESI activity. The IETF PKIX working group also works in this area.

Use of PKI for IP Security (IPSec)
IPSec provides IP network-layer encryption. The standards define several new packet formats: the authentication header (AH) to provide data integrity, and the encapsulating security payload (ESP) to provide confidentiality and data integrity. Key management is negotiated with the Internet Key Exchange (IKE). However, a man-in-the-middle attack is possible without the use of PKI.

The basic principles for IPSec and how PKI may be applied are considered.
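The man-in-the-middle risk can be sketched with an unauthenticated Diffie-Hellman exchange of the kind IKE negotiates; without PKI-based authentication of the public values, an interceptor ends up sharing a key with each party (toy parameters for illustration; real deployments use standardized groups such as those in RFC 3526):

```python
import secrets

# Unauthenticated Diffie-Hellman, as IKE would run it without
# certificates. Toy group: p is the Mersenne prime 2**127 - 1.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1     # Alice's private exponent
b = secrets.randbelow(p - 2) + 1     # Bob's private exponent
m = secrets.randbelow(p - 2) + 1     # Mallory, in the middle

A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# Mallory intercepts both public values and substitutes her own, so
# each party unknowingly agrees on a key with Mallory instead.
key_alice = pow(M, a, p)             # Alice, using "Bob's" value
key_mallory_with_alice = pow(A, m, p)
key_bob = pow(M, b, p)               # Bob, using "Alice's" value
key_mallory_with_bob = pow(B, m, p)

assert key_alice == key_mallory_with_alice
assert key_bob == key_mallory_with_bob
# Neither side can detect the substitution without authenticating
# the exchanged values, e.g. via PKI certificates.
```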

Use of PKI for RFID applications
Within certain supply chains, e.g. the pharmaceutical supply chain, there are problems with counterfeit and compromised products. It is important to ensure that the creator of a Radio Frequency Identification (RFID) tag is who it is assumed to be. One way is for the creator of the tag information to digitally sign critical pieces of information on the tag and then to ensure that the tag is non-detachable.

Information associated with an RFID tag may be retrieved using directory technology, which may also provide information necessary for verifying a digital signature on the RFID tag.

Use of PKI within cloud computing
Cloud computing raises many security issues. Authentication of cloud computing users is essential for protecting cloud computing resources from misuse. How PKI may be used in this area is discussed.
John Sabo (Director, Global Government Relations, CA Technologies): A Service and Functions-Based Reference Model for Data Privacy

This presentation will provide a detailed discussion of the Privacy Management Reference Model developed by the International Security Trust and Privacy Alliance (ISTPA) and contributed to the OASIS Privacy Management Reference Model (PMRM) Technical Committee, a committee affiliated with the OASIS IDtrust Member Section.

Unlike the closely related discipline of information security, privacy has no standards-based operational models enabling the development of privacy-compliant technical architectures.

This deficiency is increasingly visible as government and industry committees work to define data protection risks in Health IT, Smart Grid, and cloud environments.

To address this serious gap, the PMRM Technical Committee has been established in the OASIS standards organization. The work of the TC is based on the “Privacy Management Reference Model v. 2.0” published by the International Security, Trust, and Privacy Alliance (ISTPA) in late 2009. The Reference Model will serve as a template for developing operational solutions to privacy requirements, as an analytical tool for assessing the completeness of proposed solutions, and as the basis for establishing categories and groupings of privacy management controls and privacy-compliant architectures.

The OASIS Privacy Management Reference Model will:

• Define a set of privacy management services to support and implement privacy requirements at a functional level, where a “service” is a collection of related functions and mechanisms that operate for a specified purpose.

• Define a structured format for describing privacy management services and identify categories of functions that may be used in defining and executing the services.

• Establish an explicit relationship between security requirements and supporting security services (such as confidentiality, integrity and availability services) and the privacy management services.

This presentation will provide an introduction to the Reference Model and its relevance to rapidly-developing cloud, smart grid, health IT and similar networked infrastructures. It will address the privacy management and compliance barriers to the widespread deployment of these infrastructures, as identified in research studies and assessment efforts. It will discuss the objectives and deliverables of the new OASIS PMRM Technical Committee, including delivery of a set of operational privacy management services, syntactically-structured and logically related functions for each Service, and the development of relevant use cases.
Gregg Schudel (Technical Marketing Engineer, LISP, Cisco Systems, Inc.): Security Aspects of Locator/ID Separation Protocol

The current Internet routing and addressing architecture overloads the semantics of the IP address by using a single namespace that simultaneously expresses two functions about a device: its identity, and its location (how it attaches to the network). One very visible and detrimental result of this single-namespace concept is manifested in the rapid growth of the global (Internet) routing table as a consequence of multi-homing, traffic engineering (TE), non-aggregatable address allocations, and business events such as mergers and acquisitions. The Locator/ID Separation Protocol (LISP), currently under working-group development by the Internet Engineering Task Force (IETF), implements a new routing and addressing architecture that splits identity and location into their own namespaces. This yields advantages such as improved scalability of the routing system through greater aggregation in the location namespace, improved multi-homing efficiency, including ingress traffic engineering, simplified IPv6 transition, and improved endpoint mobility.
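The namespace split can be sketched as a lookup in a mapping system that resolves an endpoint identifier (EID) to its current set of routing locators (RLOCs). The prefixes, addresses and function names below are illustrative, not the actual LISP wire protocol:

```python
# Illustrative EID-to-RLOC mapping system (hypothetical values).
# EIDs name *who* a host is; RLOCs name *where* it currently
# attaches to the network.
mapping_system = {
    "10.1.0.0/16": ["203.0.113.1", "198.51.100.7"],  # multi-homed site
    "10.2.0.0/16": ["192.0.2.9"],
}

def eid_prefix(eid):
    # Crude stand-in for longest-prefix match: key on the first
    # two octets of the address.
    a, b, *_ = eid.split(".")
    return f"{a}.{b}.0.0/16"

def map_request(eid):
    """Resolve an EID to its current RLOC set (like a Map-Request)."""
    return mapping_system.get(eid_prefix(eid), [])

# An ingress tunnel router would encapsulate traffic for 10.1.2.3
# toward one of the returned locators.
assert map_request("10.1.2.3") == ["203.0.113.1", "198.51.100.7"]

# Mobility and re-homing change only the site's mapping entry;
# the global routing table is untouched.
mapping_system["10.2.0.0/16"] = ["198.51.100.50"]
assert map_request("10.2.7.7") == ["198.51.100.50"]
```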

Deploying LISP has the potential to provide significantly useful security benefits, such as end-to-end session identification, including source location, spoofed packet protection, and ingress traffic control, including selective source push-back for DoS/DDoS protection.

Deploying LISP also presents potential risks, as it will require additional functionality to be implemented on security devices so that they are aware of, and capable of inspecting, packets within the LISP encapsulation header. This presentation provides an overview of these and other security-related implications of deploying LISP from the perspective of the enterprise.
Jon Shamah (European Sales Manager, eSecurity Enterprise Solutions, NETS): NemID: An Agile National eID

This paper describes NemID, the Danish national eID programme currently being deployed, and shows how a 'light' eID can be an advantage in stimulating user acceptance and building critical mass.

“An Agile National eID for Denmark”
• Background to NemID
• Deciding Factors
• Impact
• Evolution
• European Context
Heung Youl Youm (Vice-Chair, ITU-T SG 17): Privacy and security issues for Cloud computing service

Privacy is one of the most critical problems in providing cloud computing services. In this presentation, various privacy threats will be identified and some security guidelines will be addressed. In addition, some recommendations on encryption and key management for protecting users' privacy will be described. Finally, legal risks will also be presented.
Debabrata Nayak (Director, Cloud Security, Huawei): Scalable key management solution for private cloud

Key management plays an important role in the telecom industry, as telecom equipment interacting with each other must be connected through shared secret keys. The current problems faced by the industry are how to securely distribute keys to the network elements, how to generate keys, and how to manage key lifetimes. Storing and validating keys also involves considerable cost, and the problem grows when scaled to multiple network elements. In a cloud environment, operator equipment is not in one place but distributed and shared across multiple geographical regions, so scaling key management requires careful design; in some cases the same network is even shared by multiple operators. We therefore need a scalable key management solution that fits both private and public cloud scenarios. This talk will cover how keys can be distributed in the private cloud scenario, considering network elements such as DHCP, DNS, OSS and BSS. It will also cover the public cloud scenario, showing how media content can be distributed securely with a generic key management solution while control remains in the hands of the mobile operators.
David W. Chadwick (University of Kent, Canterbury): Cardspace in the cloud

The current CardSpace/Information Card design provides a very intuitive user interface for providing identity information to service providers. The user is presented with a set of card icons, and clicks on the card he or she wishes to send. However, the model is severely limited in that only a single card can be selected for any given transaction, and the types of authentication that are supported are very restricted. Many transactions typically require several cards to be presented e.g. a credit card (to pay), a club card (for a discount or points), and a personal card (for delivery address). Furthermore many systems support authentication methods that Information Cards do not currently support e.g. one time passwords, two factor authentication etc.

Allowing a user to select multiple cards from different card issuers presents a significant challenge since the user will typically be known by different identifiers at each of the card issuers. How can the service provider be assured that all the presented cards actually belong to the current user? How can the identity system collect multiple cards without requiring the user to authenticate to each of the card issuers in each session?

This presentation will describe an enhanced information card model which allows a user to click on several cards in a single transaction, whilst only requiring the user to authenticate once per session (instead of once per selected card). In order to facilitate this, the model proposes a new service called the Identity Aggregator. The presentation will further describe how we have mapped this model and its protocols onto existing standard protocols, in order to facilitate interoperability between multiple service providers and card issuers. The software is currently being built as part of the EC TAS³ project.

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 216287 (TAS³ - Trusted Architecture for Securely Shared Services).
Nir Kshetri (University of North Carolina, USA): Cloud computing and cybercrime

Cloud computing is a double-edged sword from the security standpoint. Despite its potential to provide low-cost security, individuals and organizations may increase risks by storing sensitive data in the cloud. In this paper, we analyze how the cloud's characteristics such as newness, nature of the architecture, and attractiveness and vulnerability as a cybercrime target may help upgrade criminal practices on the Internet to cybercrime 2.0. We also investigate how the contexts provided by formal and informal institutions affect security issues associated with data in the cloud.
Mario Hoffmann (Head of Department "Secure Services & Quality Testing", Fraunhofer Institute for Secure Information Technology) and Werner Streitberger (Senior Research, Fraunhofer Institute for Secure Information Technology): WWRF – Cloud Implications to Security, Privacy, and Trust

Today, Cloud Computing is the major outsourcing trend, bringing all related technologies, services and process aspects together in a mature and professional way. The term Cloud Computing refers to infrastructure, platforms and software which can be rented as a service on demand in a very flexible and dynamic way. The Telco industry is one natural provider of such Cloud services. Some features, however, imply well-known as well as new challenges to security, privacy and trust. This paper analyses these challenges for Telcos, identifies open issues, and discusses a research roadmap towards secure and trustworthy Cloud Computing for all participants. For an introduction to Cloud security see also [CSA2009] and [ENISA2009].

One of the most important features of Cloud Computing is transparency. Cloud service consumers, such as companies, authorities, and private consumers, do not have to take care of where and how data is stored; it is just somewhere in the Cloud and can be accessed from everywhere at any time. This main advantage is at the same time the root of challenging questions: Where is my data? Who has access to it? How can a Cloud service consumer monitor and control access to his resources? How can I guarantee compliance with national law, certified processes, and the company's security policies? Which jurisdiction is applicable?

An integral part of Cloud services is identity and access management. Identity as a Service is just one aspect, covering services that range from simple user provisioning to identity federation, at whatever level of complexity service consumers need to enable their business processes. Identity in the Cloud also refers to the clear identification of a unique "object" within an ambiguous environment and its identity lifecycle, as well as to user centricity and data control in a decentralized environment. Finally, identity in the Cloud includes single sign-on and usability. See for example [CSA2010], [KANTARA2010], and [OASIS2010].

Depending on where and how cloud services are realized and offered, we distinguish private and public clouds; cloud service consumers may even combine these into so-called hybrid clouds. Here, complexity, interoperability, and the ability to change cloud service provider easily are major issues, as standardization is still at an early stage. In the Cloud, it is most important for service providers as well as service consumers to identify their individual protection goals and risks appropriately, following a comprehensive taxonomy as proposed, for example, in [StRu2010].

From a research perspective the Cloud approach can be extended to restricted resources which are only temporarily available, such as mobile devices in a meeting room. Devices could share single features and dedicated resources for a specific time period taking advantage of additional features from the direct environment. Here, the term Cloud has to be re-defined for mobile application scenarios (see [KaFi2010]).

Fully homomorphic encryption is a promising candidate for solving the problem that encrypted data in the Cloud must currently be decrypted before it can be processed. Operations such as multiplication and addition can be performed directly on encrypted data, which would eventually reach a much higher security level (see [SmVe2009]).
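As a hint of what computing on encrypted data means, even unpadded textbook RSA is already partially (multiplicatively) homomorphic, which a few lines can demonstrate; fully homomorphic schemes extend this to both addition and multiplication (toy parameters, illustration only):

```python
# Toy unpadded RSA, which is multiplicatively homomorphic: the
# product of two ciphertexts decrypts to the product of the
# plaintexts, without the server ever seeing either plaintext.
p, q = 1009, 1013            # artificially small primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 42, 17
c_product = (enc(m1) * enc(m2)) % n   # computed on ciphertexts only
assert dec(c_product) == (m1 * m2) % n
```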

Trustworthy virtualization.
Finally, next-generation Trusted Platform Modules (TPMs) are able to support virtualization at the hardware layer. Since virtualization is a very important enabler of Cloud Computing, this addresses issues related to untrustworthy transactions between Cloud entities at the infrastructure, platform and software levels. In environments with high security requirements, TPMs might serve as trust anchors (see for example [HKH2009]).

The paragraphs above cover only principal aspects, motivated by dedicated examples, which would be discussed systematically and in more detail in the full paper. The full paper would also embed the security discussion on the Cloud in the overall vision of WWRF.
Victor Kutukov (Chairman of ITU-T Focus Group on Cloud): The latest activities on ITU-T Focus Group on Cloud Computing

The ITU-T Focus Group on Cloud Computing was established in February 2010 at the last TSAG meeting to identify study subjects related to Cloud Computing for the ITU-T study groups. After its third meeting, the FG has successfully produced a set of output materials on Cloud Computing, including Cloud security. This presentation will introduce the latest activities of the FG, focusing especially on Cloud security.
Miho Naganuma (ISOG-J, Q.3 Rapporteur, ITU-T SG 17): Industry-wide approach: Raising awareness for ICT security infrastructure

In the current cybersecurity situation, the traditional security practices of organizations of all sectors and sizes are no longer strong enough to withstand a growing number of targeted attacks, which are more serious than ever before. It is imperative to share consistent resources, including information and technology, across a broad range of areas at the national, local community and industry levels to protect ICT infrastructure.

Given such levels of resource sharing, "information exchange" is becoming the key issue for incident response.

This presentation introduces a new industry-wide approach to information exchange by Managed Security Service Providers (MSSPs), one of the major stakeholders in ICT infrastructure, to raise awareness and promote effective incident response. It also addresses the issues of information exchange arising in practical activities and highlights challenges for developing countries.
Robert A. Martin (Principal Engineer, MITRE, CNIS Group): Vendor Neutral Security Measurement & Management with Standards

This presentation will explore how the Making Security Measurable standards, fostered by MITRE and others over the last 10 years, are facilitating the use of automation to assess, manage and improve the security posture of enterprise security information infrastructures, while also fostering effective security process coordination across adopting organizations and creating a vendor- and tool-neutral environment for managing an organization's security posture.

The basic premise of these efforts is that for any enterprise to measure and manage the security of its cyber assets, it will have to employ automation. For an enterprise of any reasonable size, that automation will come from multiple sources, so to make the finding and reporting of issues consistent and composable across different tools there has to be an underlying set of standard definitions of the things being examined, reported and managed by the different tools. These standardization efforts are collectively referred to as the Making Security Measurable initiatives, and subsets of them are used for security automation content. Additionally, several of them are being used as the basis for revamping Common Criteria version 4, and they are being utilized within ITU standards as part of ITU-T X.cybex, the Global Cybersecurity Information Exchange Framework (CYBEX) family of standards.

Information security measurement and management, as currently practiced, is complex, expensive, and fraught with unique activities and tailored approaches. Solving the variety of challenges currently facing enterprises with regards to incident and threat management, patching, application security, and compliance management requires fundamental changes in the way vendor technologies are adopted and integrated.

Likewise, to support organizational discipline and accountability objectives while enabling innovation and flexibility, the security industry needs to move to a vendor neutral security management and measurement strategy that is agnostic to the specific solution providers while also flexible enough to work with several different solutions simultaneously.

The Making Security Measurable initiatives provide the foundation for answering today's increased demands for accountability, efficiency and interoperability without artificially constraining an organization's solution options.
Damir Rajnovic (FIRST SDO Liaison, FIRST): CSIRT, Information Sharing and You

This presentation has two main goals:

1) To showcase the current state of information exchange among CSIRTs and other teams handling security incidents.

2) To give practical and concrete examples of how participants can get involved and interact with various groups from the community.

In today's world, attacks on computer systems and networks are constant and relentless. These attacks can result in direct compromise of an organisation's IT system or harmful acts against the organisation's customer base or affiliates (e.g., phishing). Whatever the case, two main objectives are always present in cross-CSIRT incident response: the first is to contact an individual or an organisation outside your own, and the second is to exchange information so that the incident can be handled.

CSIRTs around the world accomplish both goals routinely and on an everyday basis. There are multiple groups and individual teams that are crucial in combating computer incidents, but their work, and even their existence, is largely unknown outside the security community.

This presentation will discuss these teams and what they are doing, as well as what can be expected from them and how to interact with them.

This will be demonstrated through several very common examples, such as denial-of-service attacks, phishing and compromised systems. In all these instances there are groups and methods that can help. Most importantly, this help can be provided fast and is available to everyone.
Ian Bryant (EU NEISAS Project): Challenges in Sharing Security Information

The sharing of information about the security risks facing networks is self-evidently beneficial to both government and industry. If a standardised mechanism existed through which one organisation could learn from the experiences, mistakes, and successes of another, without fear of exposing the organisation's sensitivities to national security, competitors and the media, then every participant could improve their level of assurance.

Concepts of trust therefore intrinsically underpin the willingness of stakeholders to share information.

The presentation covers the challenges in engendering trust which have to be taken into account when developing structures and mechanisms for sharing security information, and explores the work done by the MS3i and NEISAS Projects in this area.
Takeshi Takahashi (NICT) and Youki Kadobayashi (NICT) : Ontological Approach toward Cybersecurity in Cloud Computing

Widespread deployment of the Internet has enabled an emerging IT delivery model: cloud computing. Although cloud computing-based services have developed rapidly, their security aspects are still at an initial stage of development. In order to preserve cybersecurity in cloud computing, the cybersecurity information to be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations, mainly focused on non-cloud computing. To identify the cybersecurity information needed in cloud computing, we then apply this ontology to cloud computing. Through this discussion, we identify essential changes brought by cloud computing, such as data-asset decoupling, and clarify the cybersecurity information these changes require, such as data provenance and resource dependency information.
Thomas Millar (Senior Researcher, Analyst & Action Officer, United States Computer Emergency Readiness Team (US-CERT)): An operational model of CIRT processes for improved collaboration and capability development

This presentation will describe our approach to a process model and accompanying domain ontology for cyber security incident response, with potential applications for event management and threat analysis, as well as broader risk management functions such as software assurance. The model I will be presenting differs significantly from similar recent work due to its grounding in real-world CIRT operational processes and decision-making needs.
Luc Dandurand (Senior Scientist, CAT2 - Cyber Defence and Assured Information Sharing NATO C3 Agency): Cyber Defence Data Exchange and Collaboration Infrastructure (CDXI)

This presentation will outline the NATO C3 Agency's work on the high-level requirements for an infrastructure to automate the exchange of various data for Cyber Defence purposes. This data includes both operational information on ongoing incidents and supporting data such as lists of vulnerabilities, malware and applications, amongst others. The services provided by this infrastructure are intended to be closely integrated into Cyber Defence applications and will include collaboration mechanisms to assist in the refinement of the data.
Enrico M. Staderini (Western Switzerland University of Applied Science, Switzerland): Remote clinical examination: the key issue of telemedicine

Physicians graduating from our universities' schools of medicine are very well trained in the basic art of their job: clinical examination. Mastering the art of physical examination of the patient, as well as that of history taking, is indeed a must for every physician; that is how physicians are "made" at university. When dealing with telemedicine, matters become more difficult. Without the patient within reach, the process of clinical examination is somewhat hampered, or in some respects perhaps even eased. Although almost every field of medicine has seen applications of telemedicine, a rigorous clinical "tele"-examination practice is still lacking. It goes without saying that remote clinical examination and remote history taking are not taught at medical schools, so that in telemedicine practice many physicians are more enthusiastic do-it-yourselfers than professionals. In the opinion of the author, it is time to end this attitude, which poses a series of security and safety challenges in medical practice, not to mention hampering the widespread use of telemedicine facilities. Patient-physician interaction in remote clinical examination poses important challenges if the standard clinical examination paradigm (and its associated reliability) is to be guaranteed. Communication standards and telebiometrics standards will foster best practices in telemedicine. In this paper the author proposes a roadmap to enhance collaboration and coordination so as to promote telemedicine as an integral part of the medical profession, based on robust and rigorous methodology, standards and philosophy.
Arturo Serrano (CICESE Research Center, Mexico): Developing a Framework for Health IT standardization

The role of standardization is particularly relevant in a complex and highly convergent context. On the one hand, national ICT regulatory administrations of emerging economies are committed to consolidating and strengthening their standardization processes and policies. On the other hand, national health institutions acknowledge the need for standards that include information technology elements in medical practice. The pressing need to improve the quality and coverage of medical services in developing countries, particularly in remote and under-served locations, represents an important opportunity to develop a comprehensive standards framework for Health IT. This framework requires a multidisciplinary and collaborative approach and a new focus on the participation of national standards bodies. In this contribution we propose the incorporation of three elements into this framework which, we believe, are key to improving Health IT services in developing countries: adoption and usability factors, innovation strategies, and sustainable development factors. Our proposal is based on the collaborative work of IT research institutions, governmental and private health institutions, and physicians involved in Health IT practices in both urban and rural locations.
Yong-Nyuo Shin (Hanyang Cyber University, Korea): Integrated framework for telebiometric data protection

Remote medical systems deliver medical services using computers and data communication technologies, enabling the diagnosis and treatment of patients in remote locations. Devices transmit the patient's physical information (electrocardiogram, X-rays, voice, etc.) to the hospital or doctor, where it is examined by the doctor. The doctor's treatment instructions, based on this diagnosis, are then sent from the hospital to the patient so that treatment can begin. The patient's physical information is shared not only between the patient and doctor, but also between hospitals. Such a remote medical system carries the potential for infringement of personal privacy through the disclosure of personal and medical information. For this reason, security technologies are required to protect the system from vulnerabilities while effectively safeguarding it against external attacks.

To provide stable biometric telemedicine and telehealth services, user authentication and service aspects should be considered. We provide an integrated framework for protection of biometric data and private information in telehealth. We define a model of health services using telebiometrics for user identification and authentication. It identifies the threats in transmitting various sensory data related to human health and provides the countermeasures for secure transmission when applying this integrated framework.
Yoshiaki Isobe (Hitachi, Japan): Telebiometrics Applications

Telebiometrics technologies have started to be applied to various application systems in Japan. For example, Japanese vendors have developed systems that verify identity claims made by individuals based on the unique pattern of veins in their palms and fingers. To obtain clear vein images, the sensors capture only the veins carrying deoxygenated blood back to the heart, since deoxygenated haemoglobin absorbs the near-infrared light used for imaging.

Since 2004, this technology has been deployed in 66,463 ATMs of 289 Japanese bank groups to secure access to more than two million accounts. Fraudulent withdrawals with fake or stolen ATM cards have decreased since 2005, a year in which 89% of fraudulent withdrawals were made with stolen cards. To authorize a transaction, the customer presents to the ATM a banking card, the corresponding PIN and the vascular pattern of the palm or finger, which constitutes a three-factor authentication scheme: possession, knowledge and biometric. The third factor can be used to authorize withdrawals of higher amounts. Vascular patterns are regarded as secure and tamper-proof biometric traits, as they lie inside the human body. This large-scale deployment of biometrics in a commercial application has proved successful, and other banks have started to equip their ATMs with biometric recognition capabilities.
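The three-factor scheme described above can be sketched in a few lines. This is an illustrative model only, not the banks' actual logic: the record fields, the matcher, the similarity threshold and the high-value limit are all assumptions, and a real system would compare vein images with a proprietary algorithm rather than byte-by-byte.

```python
from dataclasses import dataclass

@dataclass
class AccountRecord:
    """Hypothetical reference data held by the bank for one account."""
    card_id: str          # possession factor: ID read from the banking card
    pin: str              # knowledge factor
    vein_template: bytes  # biometric factor: enrolled vascular pattern

def match_score(template: bytes, sample: bytes) -> float:
    # Placeholder matcher: returns the fraction of matching bytes.
    # Real vein matchers compute a similarity score over extracted features.
    same = sum(a == b for a, b in zip(template, sample))
    return same / max(len(template), 1)

def authorize(record: AccountRecord, card_id: str, pin: str,
              vein_sample: bytes, amount: float,
              high_value_limit: float = 1000.0,
              threshold: float = 0.9) -> bool:
    """Possession and knowledge are always required; the biometric
    factor additionally gates withdrawals above a configured limit."""
    if card_id != record.card_id or pin != record.pin:
        return False
    if amount > high_value_limit:
        return match_score(record.vein_template, vein_sample) >= threshold
    return True
```

Using the biometric factor only above a limit, as the abstract suggests, keeps everyday withdrawals fast while reserving the strongest check for the riskiest transactions.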

This presentation will introduce vein biometrics technologies, security technologies based on template protection for biometrics, and their applications. It will also outline how the ITU-T Telebiometrics Recommendations relate to telecommunication systems.
Myung-Geun Chun (Chungbuk National University, Korea): Biometric Information Protection Standard in ISO/IEC JTC 1 SC27

Biometric authentication introduces a potential dichotomy between privacy and authentication assurance. On the one hand, biometric characteristics are, supposedly, an unchanging property associated with, and distinctive of, an individual. This binding of the credential to the person provides strong evidence of authenticity. On the other hand, this strong binding also underlies the privacy concerns surrounding the use of biometrics, such as unlawful processing of biometric data, and poses challenges for the security of biometric systems in preventing biometric references from becoming compromised. The usual security paradigm for the compromise of an authentication credential – changing the password or issuing a new token – is not generally available for biometric authentication, since biometric characteristics, being either intrinsic physiological properties or behavioural traits of individuals, are difficult or impossible to change. At most, another finger or eye could be enrolled, but the choices are limited. Therefore, appropriate countermeasures to safeguard the security of a biometric system and the privacy of its data subjects are essential.

Focusing on this issue, ISO/IEC JTC 1/SC 27 has been preparing a standard that will provide guidance for the protection of biometric information under various requirements for confidentiality, integrity and renewability/revocability during storage and transfer. The standard also describes the relationship between the biometric reference and other personally identifiable information (PII). The increasing linkage of biometric references with other PII and the sharing of biometric information across legal jurisdictions make it extremely difficult for organizations to assure the protection of biometric information and to achieve compliance with various privacy regulations. Therefore, the standard also provides guidance on requirements for the secure and privacy-compliant management and processing of biometric information, and clarifies the responsibilities of the biometric system owner.
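The renewability/revocability requirement mentioned above can be illustrated with a keyed one-way transform: store a derived reference rather than the raw template, so a compromise is handled by issuing a new key instead of asking the user for a different finger. This sketch is not the mechanism defined by the SC 27 standard; it is a minimal assumption-laden illustration, and real schemes must also tolerate the natural variation between biometric captures (e.g. via fuzzy extractors), which an exact-match HMAC does not.

```python
import hashlib
import hmac
import secrets

def protect_template(template: bytes, renewal_key: bytes) -> bytes:
    """Derive a renewable pseudonymous reference from a biometric template.
    If the reference leaks, a fresh key yields a fresh reference without
    re-enrolling a different body part."""
    return hmac.new(renewal_key, template, hashlib.sha256).digest()

def verify(stored_ref: bytes, candidate: bytes, renewal_key: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(stored_ref,
                               protect_template(candidate, renewal_key))

# Enrolment: store only the derived reference and a key identifier,
# never the raw feature vector.
key_v1 = secrets.token_bytes(32)
ref_v1 = protect_template(b"feature-vector", key_v1)

# Renewal after a suspected compromise: re-derive with a fresh key;
# the old reference is thereby revoked.
key_v2 = secrets.token_bytes(32)
ref_v2 = protect_template(b"feature-vector", key_v2)
assert ref_v1 != ref_v2
```

The point of the transform is that the stored reference is unlinkable across keys: two databases protecting the same template under different keys cannot be cross-matched, which addresses the PII-linkage concern the standard raises.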


Top - Feedback - Contact Us -  Copyright © ITU 2010 All Rights Reserved
Contact for this page : TSB EDH
Updated : 2010-12-06