[DDoS] Characteristics of DDoS Products by Configuration

2011. 6. 10. 11:46 | Posted by 꿈꾸는코난


< Inline configuration >

< Out-of-path configuration >

< WINS Technet SNIPER DDX configuration diagram >

< AhnLab TrusGuard DPX >

< SECUI NXG D >

< Arbor Networks PeakFlow >



The Problem:

Of the many security issues surrounding IT, endpoint security is critical, especially for regulatory compliance requirements. Traditionally, endpoint security followed a client/server model in which every endpoint device (servers, desktops, laptops, etc.) had a client installed and was centrally managed by IT security. The "hefty" client monitored the device, analyzed the content, and processed and acted on a set of rules. This approach was not efficient, but there was no better alternative. As IT moved to virtualized environments and the cloud, the problem only grew. Not only did this approach increase the complexity of endpoint security in virtualized and cloud environments, it also hurt overall performance. Some of the problems with the client/server approach to endpoint security are:

  • We have to install a client on every VM, with each client storing the engine, signatures, etc. locally. This has a significant impact on CPU and RAM usage, along with additional storage requirements
  • When updates are released, every VM downloads and installs them simultaneously. This leads to huge spikes in both network usage and internal resource usage, and to significant performance hits
  • Virtualization already affects server performance (but don't despair: virtualization vendors are working hard on reducing this impact, and you can see a performance comparison of different hypervisors in this Taneja Group report). If endpoint security adds a load of (# of VMs x per-VM endpoint security resource usage) on top of that, the performance hit is going to be significant
  • The large number of clients on VMs also adds a certain amount of complexity at the management layer
  • Add VM sprawl and its impact to the above

Clearly, even though the client/server approach offers reliable endpoint security, it is not an efficient way to do security in virtualized environments, and there was definitely a need for a better approach. So far, virtualization vendors have relied on third-party providers in the ecosystem to fill the gap, while third-party providers have been waiting for the virtualization vendors to offer something better, because there is not much they can do with the client/server model.

VMware’s approach to endpoint security:

At VMworld 2010 last week, VMware announced the first step towards a more efficient endpoint security model. The VMware vShield Endpoint solution for vSphere 4.1 and View 4.5 environments offers a library and APIs for integrating partner security appliances that can introspect file activity at the hypervisor layer. Imagine taking an endpoint security appliance from a vendor, putting it into the virtualized/cloud environment, and tapping into these APIs to do all the protection without installing any agent on the virtual machines. This simple and elegant approach vastly reduces the performance impact on the virtual environment and may even offer considerable cost savings in terms of license fees. It also simplifies compliance auditing, making clouds more palatable to enterprise customers with heavy compliance requirements.

vShield Endpoint plugs directly into vSphere and has the following three components to carry out the protection described above.

  • Hardened security virtual machines, provided by VMware partners like Trend Micro. This is a highly secured third-party appliance containing the anti-virus/anti-malware engine, signatures and other components needed for protection. The most important consequence of separating the engine and signatures from the virtual machines is that they remain safe even if a VM is completely compromised, an advantage the traditional client/server approach cannot offer.
  • A driver for virtual machines to offload file events. This is the thin agent VMware uses to "interact" with the security appliance provided by the partners.
  • The VMware Endpoint Security (EPSEC) Loadable Kernel Module (LKM), which links the above two components with the hypervisor.

vShield Endpoint monitors file events on virtual machines through its "thin agents" and notifies the anti-virus/anti-malware engine via EPSEC, which scans the file and returns the result. The same approach also supports regularly scheduled partial and full scans of VMs. In the event of an exploit, vulnerability or attack, admins can specify actions through the management tools integrated into vCenter/vCloud Director, and vShield Endpoint carries these actions out on the affected virtual machines.
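To make that flow concrete, here is a purely illustrative sketch of an agent-less scan offload, written in Python under invented names (SecurityVM, ThinAgent, scan, on_file_open); these are hypothetical stand-ins, not the actual vShield Endpoint or EPSEC interfaces.

    import hashlib

    # Hypothetical security appliance: holds the engine and signatures centrally,
    # so nothing security-related has to live inside the protected VM.
    class SecurityVM:
        def __init__(self, signature_hashes):
            self.signatures = set(signature_hashes)    # known-bad file hashes

        def scan(self, file_bytes):
            digest = hashlib.sha256(file_bytes).hexdigest()
            return "malicious" if digest in self.signatures else "clean"

    # Hypothetical thin agent: forwards file events instead of scanning locally.
    class ThinAgent:
        def __init__(self, appliance):
            self.appliance = appliance                 # reached via the hypervisor in reality

        def on_file_open(self, path, file_bytes):
            verdict = self.appliance.scan(file_bytes)  # offloaded scan
            if verdict == "malicious":
                print("blocking access to", path)
            return verdict

    appliance = SecurityVM({hashlib.sha256(b"EICAR-TEST").hexdigest()})
    agent = ThinAgent(appliance)
    agent.on_file_open("/tmp/eicar.txt", b"EICAR-TEST")   # -> blocking access to /tmp/eicar.txt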

Conclusion:

In my opinion (keep in mind I am not a security guru, just someone who observes them and talks to them), this is a very elegant solution by VMware to this problem. Not only have they made it simple and easy, they also help their customers get better performance out of their VMware virtualization/cloud implementations. I am pretty sure other virtualization vendors will soon come up with similar solutions. In fact, when I spoke to Trend Micro, they told me that they will support other platforms with agent-less endpoint security once those platforms offer APIs similar to what VMware is offering. Next week, I will build on this and talk about how Trend Micro tapped into vShield Endpoint to offer powerful malware protection without "any" impact on performance.


[Firewall] paloalto network firewall

2010. 11. 19. 11:52 | Posted by 꿈꾸는코난

In early 2007 I reviewed a white paper from Palo Alto Networks introducing the concept of App-ID, and at the time I doubted whether such a product was actually feasible. Conceptually it was a very advanced and sound approach, but my biggest question was whether it could really identify and classify the enormous number of applications out there. Now that the product has shipped and is actually being deployed...

App-ID

Legacy port-based firewalls are ineffective at identifying and controlling applications because they rely on port and protocol as the means of traffic classification. Most applications are capable of bypassing them using a variety of techniques, such as tunneling inside another application, sneaking across port 80, hopping ports or using SSL. This lack of visibility and control means that port-based firewalls are no longer the central control point of the security infrastructure.

In order to restore the firewall as the strategic center of the security infrastructure, Palo Alto Networks developed a traffic classification technology that accurately identifies the applications, irrespective of port, protocol, SSL, or evasive tactic. The result is App-ID™, a patent-pending traffic classification technology that enables administrators to determine exactly which applications are running on their network.

Whereas port-based firewalls use only one mechanism of traffic classification, App-ID goes well beyond any other network security technology available, inspecting all of the traffic passing through the firewall with one or more identification techniques, including application protocol detection and decryption, application protocol decoding, application signatures, and heuristic analysis. The application identity is then used as the basis of the security policy; a minimal sketch of this kind of port-independent classification appears after the list below.

Now, rather than reacting to the discovery of a strange application by summarily blocking it, the administrator can take a more balanced and informed approach: learn more about the application and then safely enable its usage or block it based on the security risks. With App-ID, IT can now:

  • Improve network visibility by accurately identifying application traffic irrespective of port and protocol.
  • Enhance security by dictating access rights based upon the actual application traffic as opposed to simply the port and protocol.
  • Increase malware prevention effectiveness by narrowing down the number of unauthorized applications traversing the network
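As a rough illustration of port-independent classification (not Palo Alto Networks' actual App-ID implementation, whose signatures and decoders are proprietary), the sketch below classifies a flow by inspecting the first payload bytes rather than the destination port; the signatures shown are deliberately simplified assumptions.

    # Minimal sketch: classify a flow by payload content, ignoring the port.
    PAYLOAD_SIGNATURES = [
        ("ssh",        lambda p: p.startswith(b"SSH-")),
        ("tls",        lambda p: len(p) > 2 and p[0] == 0x16 and p[1] == 0x03),
        ("http",       lambda p: p.split(b" ")[0] in (b"GET", b"POST", b"PUT", b"HEAD")),
        ("bittorrent", lambda p: p.startswith(b"\x13BitTorrent protocol")),
    ]

    def classify(payload: bytes, dst_port: int) -> str:
        for app, matcher in PAYLOAD_SIGNATURES:
            if matcher(payload):
                return app
        return "unknown"   # a real engine would go further with decoders and heuristics

    # An SSH session tunneled over port 80 is still identified as SSH, not "web".
    print(classify(b"SSH-2.0-OpenSSH_8.9", dst_port=80))    # -> ssh
    print(classify(b"GET /index.html HTTP/1.1\r\n", 8080))  # -> http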

User-ID

As enterprises continue to use Internet- and web-centric applications to aid expansion and increase efficiencies, visibility into what users are doing on the network becomes increasingly important. Dynamic IP addressing across both wired and wireless networks, and remote access by employees and non-employees alike have made the use of IP addresses an ineffective mechanism for monitoring and controlling user activity. Unfortunately, today’s port-based firewalls rely heavily on IP addresses as a means of identifying and controlling user activity.

Palo Alto Networks User-ID technology addresses the lack of visibility into user activity by seamlessly integrating with enterprise directory services (Active Directory, LDAP, eDirectory) to dynamically link an IP address to user and group information. In Citrix and terminal services environments, User-ID associates the individual user with their network activity, enabling IT to deploy granular security policies. Integration with other 3rd party repositories is enabled by an XML API.

With visibility into user activity, enterprises can monitor and control applications and content traversing the network based on the user and group information stored within the user repository. User-ID enables IT to:

  • Regain visibility into user activities relative to the applications in use and the content they may generate.
  • Tighten security posture by implementing policies that tie application usage to specific users and groups, as opposed to simply the IP address.
  • Identify Citrix and Microsoft Terminal Services users and control their respective application usage.

User-ID gives an administrator complete visibility into application activity at the user level, not just the IP address level, and in so doing addresses a key requirement in regaining control over the applications traversing the network. When used in conjunction with App-ID and Content-ID, User-ID enables IT organizations to enjoy unmatched policy-based visibility and control over users, applications and content.


Content-ID

Enterprise networks are rife with applications that can evade detection. Common methods include dynamically hopping ports, re-using other ports, emulating other applications or tunneling inside SSL. The use of evasive applications has not gone unnoticed by attackers as they increasingly use these invisible applications to transport threats past the firewall. Content-ID melds a uniform threat signature format, stream-based scanning and a comprehensive URL database with elements of application visibility to detect and block a wide range of threats, control non-work related web surfing, and limit unauthorized file and data transfers.

  • Vulnerability prevention (IPS): Palo Alto Networks offers complete protection from all types of network-borne threats, including traditional vulnerability exploits as well as a new generation of hybrid and multi-vector threats. The Palo Alto Networks intrusion prevention features have been independently validated to have stellar IPS accuracy (93.4% catch rate) while simultaneously maintaining datasheet performance metrics. The full NSS report can be found here. The solution blocks known and unknown network- and application-layer vulnerability exploits, buffer overflows, DoS attacks and port scans from compromising and damaging enterprise information resources. IPS mechanisms include:
    • Protocol decoders and anomaly detection
    • Stateful pattern matching
    • Statistical anomaly detection
    • Heuristic-based analysis
    • Block invalid or malformed packets
    • IP defragmentation and TCP reassembly
    • Custom vulnerability and spyware phone home signatures

    Traffic is normalized to eliminate invalid and malformed packets, while TCP reassembly and IP de-fragmentation is performed to ensure the utmost accuracy and protection despite any attack evasion techniques.

  • Stream-based Virus Scanning: Virus and spyware prevention is performed through stream-based scanning, a technique that begins scanning as soon as the first packets of the file are received, as opposed to waiting until the entire file is loaded into memory (a minimal sketch of this kind of stream-based pattern scanning appears at the end of this section). This means that performance and latency issues are minimized by receiving, scanning, and sending traffic to its intended destination immediately, without having to first buffer and then scan the file. Key antivirus capabilities include:
    • Protection against a wide range of malware such as viruses, including HTML and Javascript viruses, spyware downloads, spyware phone home, Trojans, etc.
    • Inline stream-based detection and prevention of malware embedded within compressed files and web content.
    • Leverages SSL decryption within App-ID to block viruses embedded in SSL traffic.
  • URL Filtering: Complementing the threat prevention and application control capabilities is a fully integrated, on-box URL filtering database consisting of 20 million URLs across 76 categories that enables IT departments to monitor and control employee web surfing activities. The on-box URL database can be augmented to suit the traffic patterns of the local user community with a custom, 1 million URL database. URLs that are not categorized by the local URL database can be pulled into cache from a hosted, 180 million URL database.  In addition to database customization, administrators can create custom URL categories to further tailor the URL controls to suit their specific needs. URL filtering visibility and policy controls can be tied to specific users through the transparent integration with enterprise directory services (Active Directory, LDAP, eDirectory) with additional insight provided through customizable reporting and logging.
  • Data leak prevention: Administrators can implement several different types of data leak prevention policies to reduce the risk associated with unauthorized file and data transfer. The transfer of files can be controlled by looking deep within the payload to identify the file type (as opposed to looking only at the file extension) and allow or block according to the policy. Loss of confidential data such as credit card numbers or SSN can be controlled by detecting data patterns in the application flow and responding according to the policy.

Content-ID takes full advantage of Palo Alto Networks SP3 Architecture to deliver high performance threat prevention without impeding traffic.
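The following sketch illustrates the idea behind stream-based scanning and data pattern detection described above: scan chunks as they arrive, carrying a small overlap so patterns that straddle chunk boundaries are still caught. The credit-card-style regex is a simplified stand-in, not the product's actual signature set.

    import re

    # Simplified data pattern: 16 digits, optionally separated, standing in for a card number.
    CARD_RE = re.compile(rb"\b(?:\d[ -]?){15}\d\b")
    OVERLAP = 32   # keep a small tail so matches crossing chunk boundaries are not missed

    def scan_stream(chunks):
        """Yield matches while data streams through, without buffering the whole file."""
        tail = b""
        for chunk in chunks:
            window = tail + chunk
            for m in CARD_RE.finditer(window):
                yield m.group()        # a real scanner would de-duplicate overlap-region hits
            tail = window[-OVERLAP:]   # carry the boundary region forward

    traffic = [b"name=alice&card=4111 1111 ", b"1111 1111&item=book"]
    print(list(scan_stream(traffic)))  # -> [b'4111 1111 1111 1111']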



[VoIP] Voice Over IP

2010. 9. 2. 17:54 | Posted by 꿈꾸는코난

Voice Over IP

 
The following VoIP protocols are described here:
   
  • Megaco (H.248): Gateway Control Protocol
  • MGCP: Media Gateway Control Protocol
  • MIME: Multipurpose Internet Mail Extensions
  • RVP over IP: Remote Voice Protocol Over IP Specification
  • SAPv2: Session Announcement Protocol
  • SDP: Session Description Protocol
  • SGCP: Simple Gateway Control Protocol
  • SIP: Session Initiation Protocol
  • Skinny: Skinny Client Control Protocol (SCCP)

Voice-over-IP Overview

Voice-over-IP (VoIP) implementations enable users to carry voice traffic (for example, telephone calls and faxes) over an IP network.

There are three main drivers behind the evolution of the Voice over IP market:

  • Low cost phone calls
  • Add-on services and unified messaging
  • Merging of data/voice infrastructures

A VoIP system consists of a number of different components: gateway/media gateway, gatekeeper, call agent, media gateway controller, signaling gateway and call manager.

The Gateway converts media provided in one type of network to the format required for another type of network. For example, a Gateway could terminate bearer channels from a switched circuit network (i.e., DS0s) and media streams from a packet network (e.g., RTP streams in an IP network). This gateway may be capable of processing audio, video and T.120 alone or in any combination, and is capable of full duplex media translations. The Gateway may also play audio/video messages and performs other IVR functions, or may perform media conferencing.

In VoIP, the digital signal processor (DSP) segments the voice signal into frames and stores them in voice packets. These voice packets are transported using IP in compliance with one of the specifications for transmitting multimedia (voice, video, fax and data) across a network: H.323 (ITU), MGCP (Level 3, Bellcore, Cisco, Nortel), MEGACO/H.GCP (IETF), SIP (IETF), T.38 (ITU), SIGTRAN (IETF), Skinny (Cisco), etc.

Coders are used for efficient bandwidth utilization. Different coding techniques for telephony and voice packet are standardized by the ITU-T in its G-series recommendations: G.723.1, G.729, G.729A etc.

The coder-decoder compression schemes (CODECs) are enabled for both ends of the connection and the conversation proceeds using Real-Time Transport Protocol/User Datagram Protocol/Internet Protocol (RTP/UDP/IP) as the protocol stack.
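As a small illustration of the RTP/UDP/IP stack mentioned above, the sketch below packs the 12-byte fixed RTP header (RFC 3550: version, payload type, sequence number, timestamp, SSRC) in front of one voice frame; payload type 0 corresponds to G.711 mu-law, and the field values are arbitrary examples.

    import struct

    def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int, pt: int = 0) -> bytes:
        """Build a minimal RTP packet: 12-byte fixed header (RFC 3550) + one voice frame."""
        version = 2                                   # RTP version 2
        byte0 = version << 6                          # padding=0, extension=0, CSRC count=0
        byte1 = pt & 0x7F                             # marker=0, payload type (0 = PCMU/G.711)
        header = struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
        return header + payload

    frame = b"\x00" * 160                             # one 20 ms G.711 frame at 8 kHz
    pkt = rtp_packet(frame, seq=1, timestamp=160, ssrc=0x1234ABCD)
    print(len(pkt))                                   # -> 172 (12-byte header + 160-byte frame)
    # In a real system this packet would then be handed to a UDP socket (RTP/UDP/IP).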


Quality of Service
A number of advanced methods are used to overcome the hostile environment of the IP network and to provide an acceptable quality of service. Examples of the impairments these methods must deal with are delay, jitter, echo, congestion, packet loss, and misordered packet arrival. As VoIP is a delay-sensitive application, a well-engineered, end-to-end network is necessary to use VoIP successfully. The Mean Opinion Score is one of the most important parameters that determine the QoS.

There are several methods and sophisticated algorithms developed to evaluate QoS: PSQM (ITU P.861), PAMS (BT) and PESQ. Each CODEC provides a certain quality of service. The quality of transmitted speech is a subjective response of the listener (human or artificial). A common benchmark used to determine the quality of sound produced by specific CODECs is the mean opinion score (MOS). With MOS, a wide range of listeners judge the quality of a voice sample (corresponding to a particular CODEC) on a scale of 1 (bad) to 5 (excellent).

Services
The following are examples of services provided by a Voice over IP network according to market requirements:

Phone to phone, PC to phone, phone to PC, fax to e-mail, e-mail to fax, fax to fax, voice to e-mail, IP Phone, transparent CCS (TCCS), toll free number (1-800), class services, call center applications, VPN, Unified Messaging, Wireless Connectivity, IN Applications using SS7, IP PABX and soft switch implementations.

 

Megaco (H.248)

Internet draft: draft-ietf-megaco-merged-00.txt

The Media Gateway Control Protocol, (Megaco) is a result of joint efforts of the IETF and the ITU-T Study Group 16. The protocol definition of this protocol is common text with ITU-T Recommendation H.248.

The Megaco protocol is used between elements of a physically decomposed multimedia gateway. There are no functional differences from a system view between a decomposed gateway, with distributed sub-components potentially on more than one physical device, and a monolithic gateway such as described in H.246. This protocol creates a general framework suitable for gateways, multipoint control units and interactive voice response units (IVRs).

Packet network interfaces may include IP, ATM or possibly others. The interfaces support a variety of SCN signalling systems, including tone signalling, ISDN, ISUP, QSIG and GSM. National variants of these signalling systems are supported where applicable.

Messages can be encoded either as text or in binary (ASN.1) form.


 

MGCP

RFC: 2705 ftp://ftp.isi.edu/in-notes/rfc2705.txt
MGCP

Media Gateway Control Protocol (MGCP) is used for controlling telephony gateways from external call control elements called media gateway controllers or call agents. A telephony gateway is a network element that provides conversion between the audio signals carried on telephone circuits and data packets carried over the Internet or over other packet networks.

MGCP assumes a call control architecture where the call control intelligence is outside the gateways and handled by external call control elements. The MGCP assumes that these call control elements, or Call Agents, will synchronize with each other to send coherent commands to the gateways under their control. MGCP is, in essence, a master/slave protocol, where the gateways are expected to execute commands sent by the Call Agents.

The MGCP implements the media gateway control interface as a set of transactions. The transactions are composed of a command and a mandatory response. There are eight types of commands:

MGCP Commands (MGC = Media Gateway Controller, MG = Media Gateway)

  • CreateConnection (MGC --> MG): creates a connection between two endpoints; uses SDP to define the receive capabilities of the participating endpoints.
  • ModifyConnection (MGC --> MG): modifies the properties of a connection; has nearly the same parameters as the CreateConnection command.
  • DeleteConnection (MGC <--> MG): terminates a connection and collects statistics on the execution of the connection.
  • NotificationRequest (MGC --> MG): requests the media gateway to send notifications on the occurrence of specified events in an endpoint.
  • Notify (MGC <-- MG): informs the media gateway controller when observed events occur.
  • AuditEndpoint (MGC --> MG): determines the status of an endpoint.
  • AuditConnection (MGC --> MG): retrieves the parameters related to a connection.
  • RestartInProgress (MGC <-- MG): signals that an endpoint or group of endpoints is taken in or out of service.

The first four commands are sent by the Call Agent to a gateway. The Notify command is sent by the gateway to the Call Agent. The gateway may also send a DeleteConnection. The Call Agent may send either of the Audit commands to the gateway. The Gateway may send a RestartInProgress command to the Call Agent.

All commands are composed of a command header, optionally followed by a session description. All responses are composed of a response header, optionally followed by a session description. Headers and session descriptions are encoded as a set of text lines, separated by a carriage return and line feed character (or, optionally, a single line-feed character). The headers are separated from the session description by an empty line.

MGCP uses a transaction identifier to correlate commands and responses. Transaction identifiers have values between 1 and 999999999. An MGCP entity cannot reuse a transaction identifier sooner than 3 minutes after completion of the previous command in which the identifier was used.
The command header is composed of:

  • A command line, identifying the requested action or verb, the transaction identifier, the endpoint towards which the action is requested, and the MGCP protocol version,
  • A set of parameter lines, composed of a parameter name followed by a parameter value.

The command line is composed of:

  • Name of the requested verb.
  • Transaction identifier correlates commands and responses. Values may be between 1 and 999999999. An MGCP entity cannot reuse a transaction identifier sooner than 3 minutes after completion of the previous command in which the identifier was used.
  • Name of the endpoint that should execute the command (in notifications, the name of the endpoint that is issuing the notification).
  • Protocol version.

These four items are encoded as strings of printable ASCII characters, separated by white spaces, i.e., the ASCII space (0x20) or tabulation (0x09) characters. It is recommended to use exactly one ASCII space separator.
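A minimal sketch of building and parsing an MGCP command along the lines described above (command line with verb, transaction identifier, endpoint and protocol version, followed by parameter lines); the endpoint name and parameter values are illustrative examples, not taken from a real deployment.

    def build_mgcp_command(verb, transaction_id, endpoint, params, version="MGCP 1.0"):
        """Command line + parameter lines, CRLF-separated as the text encoding allows."""
        lines = [f"{verb} {transaction_id} {endpoint} {version}"]
        lines += [f"{name}: {value}" for name, value in params]
        return "\r\n".join(lines) + "\r\n"

    def parse_command_line(text):
        verb, txid, endpoint, *version = text.splitlines()[0].split()
        return verb, int(txid), endpoint, " ".join(version)

    msg = build_mgcp_command(
        "CRCX", 1204, "aaln/1@gw-1.example.net",
        [("C", "A3C47F21456789F0"), ("L", "p:10, a:PCMU"), ("M", "recvonly")],
    )
    print(msg)
    print(parse_command_line(msg))   # -> ('CRCX', 1204, 'aaln/1@gw-1.example.net', 'MGCP 1.0')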


 

MIME

http://www.rfc-editor.org/rfcsearch.html RFC 2045 - 2049

This set of standards, collectively called the Multipurpose Internet Mail Extensions, or MIME, redefine the format of messages to allow for textual message bodies in character sets other than US-ASCII, an extensible set of different formats for non-textual message bodies, multi-part message bodies, and textual header information in character sets other than US-ASCII.

The initial standard in this set, RFC 2045, specifies the various headers used to describe the structure of MIME messages. RFC 2046 defines the general structure of the MIME media typing system and defines an initial set of media types. The third standard, RFC 2047, describes extensions to RFC 822 to allow non-US-ASCII text data in Internet mail header fields. The fourth standard, RFC 2048, specifies various IANA registration procedures for MIME-related facilities. The fifth and final standard, RFC 2049, describes MIME conformance criteria as well as providing some illustrative examples of MIME message formats, acknowledgements, and the bibliography.

The first standard in this set, RFC 2045, defines a number of header fields, including Content-Type. The Content-Type field is used to specify the nature of the data in the body of a MIME entity, by giving media type and subtype identifiers, and by providing auxiliary information that may be required for certain media types. After the type and subtype names, the remainder of the header field is simply a set of parameters, specified in an attribute/value notation. The ordering of parameters is not significant.

In general, the top-level media type is used to declare the general type of data, while the subtype specifies a specific format for that type of data. Thus, a media type of "image/xyz" is enough to tell a user agent that the data is an image, even if the user agent has no knowledge of the specific image format "xyz". Such information can be used, for example, to decide whether or not to show a user the raw data from an unrecognized subtype -- such an action might be reasonable for unrecognized subtypes of "text", but not for unrecognized subtypes of "image" or "audio". For this reason, registered subtypes of "text", "image", "audio", and "video" should not contain embedded information that is really of a different type.

Such compound formats should be represented using the "multipart" or "application" types.

Parameters are modifiers of the media subtype, and as such do not fundamentally affect the nature of the content. The set of meaningful parameters depends on the media type and subtype. Most parameters are associated with a single specific subtype. However, a given top-level media type may define parameters which are applicable to any subtype of that type. Parameters may be required by their defining media type or subtype or they may be optional. MIME implementations must also ignore any parameters whose names they do not recognize.

MIME's Content-Type header field and media type mechanism has been carefully designed to be extensible, and it is expected that the set of media type/subtype pairs and their associated parameters will grow significantly over time. Several other MIME facilities, such as transfer encodings and "message/external-body" access types, are likely to have new values defined over time. In order to ensure that the set of such values is developed in an orderly, well-specified, and public manner, MIME sets up a registration process which uses the Internet Assigned Numbers Authority (IANA) as a central registry for MIME's various areas of extensibility. The registration process for these areas is described in RFC 2048.
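A small sketch of the Content-Type structure described above, using Python's standard email package to split a header value into its top-level type, subtype and parameters; the header value itself is just an example.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Content-Type"] = 'text/plain; charset="ISO-8859-1"; format=flowed'

    print(msg.get_content_type())      # -> text/plain
    print(msg.get_content_maintype())  # -> text   (general type of data)
    print(msg.get_content_subtype())   # -> plain  (specific format for that type)
    print(msg.get_params())            # -> [('text/plain', ''), ('charset', 'ISO-8859-1'), ('format', 'flowed')]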


 

RVP over IP

RVP Over IP Specification, MCK Communications (Proprietary)

Remote Voice Protocol (RVP) is MCK Communications' protocol for transporting digital telephony sessions over packet or circuit based data networks. The protocol is used primarily in MCK's Extender product family, which extends PBX services over Wide Area Networks (WANs). RVP provides facilities for connection establishment and configuration between a client (or remote station set) device and a server (or phone switch) device.

RVP/IP uses TCP to transport signalling and control data, and UDP to transport voice data.

Signalling and Control Packets

Control and signalling packets carried over TCP are encapsulated using the following format, a header followed by signalling or control messages:

Length (1 byte) | Protocol code (1 byte) | RVP/IP messages...
RVP over IP packet structure

Length
A one-byte field containing the length of the header (the protocol code and the entire RVP/IP message). The length field allows recognition of message boundaries in a continuous TCP data stream.

Protocol code
Identifies the RVP/IP protocol:

35    RVP/IP control messages (see RVP Control Protocol).
36    RVP/IP signalling data (see RVP Signalling Operations).

RVP/IP messages
RVP/IP messages include RVP Control Protocol (RVPCP) and RVP Signalling Operations described below.

RVP Control Protocol (RVPCP)

RVP Control Protocol is for control messages that configure and maintain the data link between the client and the server. The control protocol was originally developed for point-to-point data applications; most of its functionality is unnecessary when using TCP/IP. During an RVP/IP session, only one class of RVP/IP control message is exchanged: the RVPCP ADD VOICE (operation code 12) packet, used to send the client's UDP port (for subsequent voice data packets) to the server. This message always takes a single parameter of type RVPCP UDP PORT (type code 9), which always has a length of exactly two and whose value is the two-byte UDP port to which voice data packets should be addressed. The server responds with a packet containing the code RVPCP ADD VOICE ACK (operation code 13), which contains exactly one parameter, the server's voice UDP port. If RVP/IP is operating in "dynamic voice" mode, this exchange must be repeated whenever the voice channel needs to be re-established, i.e., whenever the phone goes off-hook.

The structure of the control messages is described below:

Operation code (2 bytes) | Parameter count (2 bytes) | Parameters...
RVP over IP control message structure

Operation code
The operation code defines the class of RVP/IP control message. Possible classes are:

12    RVPCP ADD VOICE
13    RVPCP ADD VOICE ACK

Parameter count
The number of parameters in the message; for these control messages it is always exactly one.

Parameters
Parameters of all control messages are passed as Type, Length and Value (TLV) structures as described below:

Type (2 bytes) | Length (2 bytes) | Value...
RVP over IP control message parameter structure

Type
RVPCP UDP PORT (or type code 9).

Length
The number of bytes in the value field.

Value
The UDP port number.
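Putting the pieces above together, here is a minimal sketch that builds the RVPCP ADD VOICE message (operation code 12 with a single RVPCP UDP PORT parameter) inside the one-byte length / one-byte protocol-code TCP encapsulation. Big-endian (network) byte order for the two-byte fields is an assumption, since the proprietary specification excerpt above does not state the byte order.

    import struct

    RVPCP_PROTOCOL_CODE = 35   # RVP/IP control messages
    RVPCP_ADD_VOICE     = 12   # operation code
    RVPCP_UDP_PORT      = 9    # parameter type code

    def rvpcp_add_voice(udp_port: int) -> bytes:
        """ADD VOICE control message wrapped in the length/protocol-code header."""
        # Parameter as a TLV: type (2 bytes), length (2 bytes), value (2-byte UDP port).
        param = struct.pack("!HHH", RVPCP_UDP_PORT, 2, udp_port)
        # Control message: operation code (2 bytes), parameter count (2 bytes), parameters.
        body = struct.pack("!HH", RVPCP_ADD_VOICE, 1) + param
        # Encapsulation: the length byte covers the protocol code plus the message, per the text.
        return struct.pack("!BB", 1 + len(body), RVPCP_PROTOCOL_CODE) + body

    pkt = rvpcp_add_voice(5004)
    print(pkt.hex())   # -> 0b23000c000100090002138c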

RVP Signalling Operations


The structure of RVP signalling data (protocol type 36) is described below:

Packet Length | Protocol | Message Length | Data
RVP over IP signalling message structure

RVP signalling data packets always begin with a length byte immediately after the RVP/IP encapsulation header. The packets contain two classes of data: either raw digital telephone signalling packets or high-level RVP session commands. Session commands are differentiated from raw signalling data by adding an offset of 130 to the "Message Length" field. All raw signalling data has a true length of less than or equal to 128. The true length of a session command message is calculated by subtracting 130 from the length field.

For all session commands, the Command Code (one-byte) follows the message length field. Bit seven of the command code is considered the "ACK" bit. All other bits in this field are part of the command code itself.

Voice Data Packets

The structure of voice data packets, carried over UDP datagrams, is described below:

Protocol | RVP/IP Voice Data...
RVP over IP voice packet structure

Protocol
The protocol code is always 37 for RVP/IP voice data packets.

RVP/IP voice data
A single voice packet is carried in each UDP datagram.


 

SAPv2

Internet draft: http://search.ietf.org/internet-drafts/draft-ietf-mmusic-sap-v2-04.txt 

SAP is an announcement protocol that is used by session directory clients. A SAP announcer periodically multicasts an announcement packet to a well-known multicast address and port. The announcement is multicast with the same scope as the session it is announcing, ensuring that the recipients of the announcement can also be potential recipients of the session the announcement describes (bandwidth and other such constraints permitting). This is also important for the scalability of the protocol, as it keeps local session announcements local.

The following is the format of the SAP data packet:

V=1 | A | R | T | E | C | Auth len | Msg id hash
Originating source
Optional authentication data
Optional timeout
Optional payload type | 0
Payload

SAP data packet structure

V: Version Number
The version number field is three bits and MUST be set to 1.

A: Address Type
The Address type field is one bit. It can have a value of 0 or 1:
0          The originating source field contains a 32-bit IPv4 address. 
1          The originating source contains a 128-bit IPv6 address.

R: Reserved
SAP announcers set this to 0. SAP listeners ignore the contents of this field.

T: Message Type
The Message Type field is one bit. It can have a value of 0 or 1:
0          Session announcement packet
1          Session deletion packet.

E: Encryption Bit
The encryption bit may be 0 or 1.
1          The payload of the SAP packet is encrypted and the timeout field must be added to the packet header.
0          The packet is not encrypted and the timeout must not be present. 

C: Compressed Bit
If the compressed bit is set to 1, the payload is compressed.

Authentication Length
An 8 bit unsigned quantity giving the number of 32 bit words, following the main SAP header, that contain authentication data. If it is zero, no authentication header is present.

Message Identifier Hash
A 16-bit quantity that, used in combination with the originating source, provides a globally unique identifier indicating the precise version of this announcement.

Originating Source
This field contains the IP address of the original source of the message. This is an IPv4 address if the A field is set to zero; otherwise, it is an IPv6 address. The address is stored in network byte order.

Timeout
When the session payload is encrypted, the detailed timing fields in the payload are not available to listeners not trusted with the decryption key. Under such circumstances, the header includes an additional 32-bit timestamp field stating when the session should be timed out. The value is an unsigned quantity giving the NTP time in seconds at which time the session is timed out. It is in network byte order.

Payload Type
The payload type field is a MIME content type specifier, describing the format of the payload. This is a variable length ASCII text string, followed by a single zero byte (ASCII NUL).

Payload
The payload is typically an SDP session description (see SDP below).

Authentication Data
The optional authentication data, when present, includes the following sub-fields:

Version number (V)
The version number of the authentication format is 1.

Padding Bit (P)
If necessary, the authentication data is padded to be a multiple of 32 bits and the padding bit is set. In this case the last byte of the authentication data contains the number of padding bytes (including the last byte) that must be discarded.

Authentication Type (Auth)
The authentication type is a 4 bit encoded field that denotes the authentication infrastructure the sender expects the recipients to use to check the authenticity and integrity of the information. This defines the format of the authentication sub-header and can take the values: 0=PGP format, 1=CMS format. All other values are undefined.
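A minimal sketch of parsing the fixed part of the SAP header described above (flag bits, authentication length, message identifier hash and originating source); it assumes an IPv4 originating source and no authentication data, and it ignores the optional timeout and payload type fields.

    import socket
    import struct

    def parse_sap_header(packet: bytes):
        """Parse the fixed SAP header fields laid out in the structure above."""
        flags, auth_len, msg_id_hash = struct.unpack("!BBH", packet[:4])
        header = {
            "version":     (flags >> 5) & 0x07,   # V: must be 1
            "ipv6":        bool(flags & 0x10),    # A: address type
            "deletion":    bool(flags & 0x04),    # T: message type
            "encrypted":   bool(flags & 0x02),    # E
            "compressed":  bool(flags & 0x01),    # C
            "auth_len":    auth_len,              # in 32-bit words
            "msg_id_hash": msg_id_hash,
        }
        # Assumes A=0 (IPv4) and no authentication data for simplicity.
        header["origin"] = socket.inet_ntoa(packet[4:8])
        header["payload"] = packet[8 + 4 * auth_len:]
        return header

    pkt = bytes([0x20, 0x00, 0x12, 0x34]) + socket.inet_aton("192.0.2.1") + b"v=0\r\n..."
    print(parse_sap_header(pkt))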


 

SDP

RFC 2327 ftp://ftp.isi.edu/in-notes/rfc2327.txt

The Session Description Protocol (SDP) describes multimedia sessions for the purpose of session announcement, session invitation and other forms of multimedia session initiation.

On the Internet multicast backbone (Mbone), a session directory tool is used to advertise multimedia conferences and to communicate the conference addresses and conference-tool-specific information necessary for participation. SDP does exactly this: it communicates the existence of a session and conveys sufficient information to enable participation in it. Many SDP messages are sent by periodically multicasting an announcement packet to a well-known multicast address and port using SAP (Session Announcement Protocol); these messages are UDP packets with a SAP header and a text payload, where the text payload is the SDP session description. Messages can also be sent using email or the WWW (World Wide Web).

The SDP text messages include:

  • Session name and purpose
  • Time the session is active
  • Media comprising the session
  • Information to receive the media (address etc.)

SDP messages are text messages using the ISO 10646 character set in UTF-8 encoding.
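To make the field list above concrete, here is a minimal sketch that assembles a small SDP session description in Python; the addresses, session name and media line are illustrative examples only.

    # Minimal SDP session description built from the fields listed above.
    sdp_lines = [
        "v=0",                                                # protocol version
        "o=alice 2890844526 2890842807 IN IP4 192.0.2.10",    # origin
        "s=Weekly status call",                               # session name and purpose
        "c=IN IP4 224.2.17.12/127",                           # address to receive the media
        "t=2873397496 2873404696",                            # time the session is active
        "m=audio 49170 RTP/AVP 0",                            # media comprising the session (G.711 audio)
    ]
    sdp = "\r\n".join(sdp_lines) + "\r\n"
    print(sdp)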


 

SIP



RFC 2543 ftp://ftp.isi.edu/in-notes/rfc2543.txt

Session Initiation Protocol (SIP) is a simple application-layer control (signalling) protocol for VoIP implementations; SIP servers can operate in proxy or redirect mode.

SIP is a text-based client-server protocol and provides the necessary protocol mechanisms so that end-user systems and proxy servers can provide different services:

  1. Call forwarding in several scenarios: no answer, busy, unconditional, address manipulations (such as 700-, 800- and 900-type calls).
  2. Callee and calling number identification
  3. Personal mobility
  4. Caller and callee authentication
  5. Invitations to multicast conference
  6. Basic Automatic Call Distribution (ACD)

SIP addresses (URL) can be embedded in Web pages and therefore can be integrated as part of powerful implementations (Click to talk, for example).

SIP's simple protocol structure provides the market with fast operation, flexibility, scalability and multiservice support.

SIP provides its own reliability mechanism. SIP creates, modifies and terminates sessions with one or more participants. These sessions include Internet multimedia conferences, Internet telephone calls and multimedia distribution. Members in a session can communicate using multicast or using a mesh of unicast relations, or a combination of these. SIP invitations used to create sessions carry session descriptions which allow participants to agree on a set of compatible media types. It supports user mobility by proxying and redirecting requests to the user's current location. Users can register their current location. SIP is not tied to any particular conference control protocol. It is designed to be independent of the lower-layer transport protocol and can be extended with additional capabilities.

SIP transparently supports name mapping and redirection services, allowing the implementation of ISDN and Intelligent Network telephony subscriber services. These facilities also enable personal mobility, which is based on the use of a unique personal identity.

SIP supports five facets of establishing and terminating multimedia communications:

  • User location
  • User capabilities
  • User availability
  • Call setup
  • Call handling

SIP can also initiate multi-party calls using a multipoint control unit (MCU) or fully-meshed interconnection instead of multicast. Internet telephony gateways that connect Public Switched Telephone Network (PSTN) parties can also use SIP to set up calls between them.

SIP is designed as part of the overall IETF multimedia data and control architecture, which currently incorporates protocols such as RSVP, RTP, RTSP, SAP and SDP. However, the functionality and operation of SIP does not depend on any of these protocols.

SIP can also be used in conjunction with other call setup and signalling protocols. In that mode, an end system uses SIP exchanges to determine the appropriate end system address and protocol from a given address that is protocol-independent. For example, SIP could be used to determine that the party can be reached using H.323 to find the H.245 gateway and user address and then use H.225.0 to establish the call.

SIP Operation

SIP works as follows: callers and callees are identified by SIP addresses. When making a SIP call, a caller first locates the appropriate server and then sends a SIP request. The most common SIP operation is the invitation. Instead of directly reaching the intended callee, a SIP request may be redirected or may trigger a chain of new SIP requests by proxies. Users can register their location(s) with SIP servers.

SIP messages can be transmitted either over TCP or UDP. SIP messages are text based and use the ISO 10646 character set in UTF-8 encoding. Lines must be terminated with CRLF. Much of the message syntax and many of the header fields are similar to HTTP. Messages can be request messages or response messages.

Protocol header structure.

The protocol is composed of a start line, message header, an empty line and an optional message body.

Request Messages

The format of the Request packet header is shown in the following illustration:

Method | Request-URI | SIP version
SIP request packet structure

Method
The method to be performed on the resource. Possible methods are INVITE, ACK, OPTIONS, BYE, CANCEL and REGISTER.

Methods
Command      Function
INVITE       Initiate call
ACK          Confirm final response
BYE          Terminate and transfer call
CANCEL       Cancel searches and "ringing"
OPTIONS      Features supported by the other side
REGISTER     Register with location service

Request-URI
A SIP URL or a general Uniform Resource Identifier, this is the user or service to which this request is being addressed.

SIP version
The SIP version being used; this should be version 2.0

Response Message

The format of the Response message header is shown in the following illustration:

SIP version | Status code | Reason phrase
SIP response packet structure

Response Codes
Response Code Prefix    Function
1xx                     Searching, ringing, queuing
2xx                     Success
3xx                     Forwarding
4xx                     Client mistakes
5xx                     Server failures
6xx                     Busy, refuse, not available anywhere

SIP version
The SIP version being used.

Status-code
A 3-digit integer result code of the attempt to understand and satisfy the request.

Reason-phrase
A textual description of the status code.
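A minimal sketch of composing a SIP request along the lines described above (request line, header fields, empty line, optional body); the addresses and tag are illustrative examples, and the header set is trimmed to the essentials.

    def sip_invite(caller, callee, call_id, cseq=1):
        """Compose a minimal SIP INVITE request; lines are CRLF-terminated as required."""
        lines = [
            f"INVITE sip:{callee} SIP/2.0",             # Method, Request-URI, SIP version
            "Via: SIP/2.0/UDP client.example.com:5060",
            f"From: <sip:{caller}>;tag=12345",
            f"To: <sip:{callee}>",
            f"Call-ID: {call_id}",
            f"CSeq: {cseq} INVITE",
            "Content-Length: 0",
            "",                                         # empty line separates headers from the (empty) body
        ]
        return "\r\n".join(lines) + "\r\n"

    print(sip_invite("alice@example.com", "bob@example.net", "a84b4c76e66710"))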

Typical SIP Calls


 

SGCP

IETF draft: http://www.ietf.org/internet-drafts/draft-huitema-sgcp-v1-02.txt

Simple Gateway Control Protocol (SGCP) is used to control telephony gateways from external call control elements. A telephony gateway is a network element that provides conversion between the audio signals carried on telephone circuits and data packets carried over the Internet or over other packet networks.

The SGCP assumes a call control architecture where the call control intelligence is outside the gateways and is handled by external call control elements. The SGCP assumes that these call control elements, or Call Agents, will synchronize with each other to send coherent commands to the gateways under their control.

The SGCP implements the simple gateway control interface as a set of transactions. The transactions are composed of a command and a mandatory response. There are five types of commands:

  • CreateConnection.
  • ModifyConnection.
  • DeleteConnection.
  • NotificationRequest.
  • Notify.

The first four commands are sent by the Call Agent to a gateway. The Notify command is sent by the gateway to the Call Agent. The gateway may also send a DeleteConnection.

All commands are composed of a Command header, optionally followed by a session description. All responses are composed of a Response header, optionally followed by a session description. Headers and session descriptions are encoded as a set of text lines, separated by a line feed character. The headers are separated from the session description by an empty line.

The command header is composed of:

  • Command line.
  • A set of parameter lines, composed of a parameter name followed by a parameter value.

The command line is composed of:

  • Name of the requested verb.
  • Transaction identifier, correlates commands and responses. Transaction identifiers may have values between 1 and 999999999 and transaction identifiers are not reused sooner than 3 minutes after completion of the previous command in which the identifier was used.
  • Name of the endpoint that should execute the command (in notifications, the name of the endpoint that is issuing the notification).
  • Protocol version.

These four items are encoded as strings of printable ASCII characters, separated by white spaces, i.e. the ASCII space (0x20) or tabulation (0x09) characters. It is recommended to use exactly one ASCII space separator.


 

Skinny

Cisco protocol

Skinny Client Control Protocol (SCCP). Telephony systems are moving to a common wiring plant. The end station of a LAN or IP-based PBX must be simple to use, familiar and relatively cheap, whereas a full H.323 terminal is quite an expensive system. An H.323 proxy can therefore be used to communicate with the skinny client using SCCP; in such a case the telephone is a skinny client over IP in the context of H.323, and the proxy handles the H.225 and H.245 signalling.

The skinny client (i.e. an Ethernet phone) uses TCP/IP to transmit and receive calls, and RTP/UDP/IP to and from a skinny client or H.323 terminal for audio. Skinny messages are carried over TCP and use port 2000.

The messages consist of Station message ID messages.

They can be of the following types:

Code      Station Message ID Message

0x0000    Keep Alive Message
0x0001    Station Register Message
0x0002    Station IP Port Message
0x0003    Station Key Pad Button Message
0x0004    Station Enbloc Call Message
0x0005    Station Stimulus Message
0x0006    Station Off Hook Message
0x0007    Station On Hook Message
0x0008    Station Hook Flash Message
0x0009    Station Forward Status Request Message
0x11      Station Media Port List Message
0x000A    Station Speed Dial Status Request Message
0x000B    Station Line Status Request Message
0x000C    Station Configuration Status Request Message
0x000D    Station Time Date Request Message
0x000E    Station Button Template Request Message
0x000F    Station Version Request Message
0x0010    Station Capabilities Response Message
0x0012    Station Server Request Message
0x0020    Station Alarm Message
0x0021    Station Multicast Media Reception Ack Message
0x0024    Station Off Hook With Calling Party Number Message
0x22      Station Open Receive Channel Ack Message
0x23      Station Connection Statistics Response Message
0x25      Station Soft Key Template Request Message
0x26      Station Soft Key Set Request Message
0x27      Station Soft Key Event Message
0x28      Station Unregister Message
0x0081    Station Keep Alive Message
0x0082    Station Start Tone Message
0x0083    Station Stop Tone Message
0x0085    Station Set Ringer Message
0x0086    Station Set Lamp Message
0x0087    Station Set Hook Flash Detect Message
0x0088    Station Set Speaker Mode Message
0x0089    Station Set Microphone Mode Message
0x008A    Station Start Media Transmission
0x008B    Station Stop Media Transmission
0x008F    Station Call Information Message
0x009D    Station Register Reject Message
0x009F    Station Reset Message
0x0090    Station Forward Status Message
0x0091    Station Speed Dial Status Message
0x0092    Station Line Status Message
0x0093    Station Configuration Status Message
0x0094    Station Define Time & Date Message
0x0095    Station Start Session Transmission Message
0x0096    Station Stop Session Transmission Message
0x0097    Station Button Template Message
0x0098    Station Version Message
0x0099    Station Display Text Message
0x009A    Station Clear Display Message
0x009B    Station Capabilities Request Message
0x009C    Station Enunciator Command Message
0x009E    Station Server Respond Message
0x0101    Station Start Multicast Media Reception Message
0x0102    Station Start Multicast Media Transmission Message
0x0103    Station Stop Multicast Media Reception Message
0x0104    Station Stop Multicast Media Transmission Message
0x105     Station Open Receive Channel Message
0x0106    Station Close Receive Channel Message
0x107     Station Connection Statistics Request Message
0x0108    Station Soft Key Template Respond Message
0x109     Station Soft Key Set Respond Message
0x0110    Station Select Soft Keys Message
0x0111    Station Call State Message
0x0112    Station Display Prompt Message
0x0113    Station Clear Prompt Message
0x0114    Station Display Notify Message
0x0115    Station Clear Notify Message
0x0116    Station Activate Call Plane Message
0x0117    Station Deactivate Call Plane Message
0x118     Station Unregister Ack Message

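As a small companion to the table above, the sketch below looks up the station message ID of a received Skinny message. The wire framing used here (a little-endian 4-byte length, a 4-byte reserved word, then the 4-byte message ID) is not stated in the text above, so treat the framing details as an assumption rather than a specification.

    import struct

    # A few codes taken from the table above.
    STATION_MESSAGES = {
        0x0000: "Keep Alive Message",
        0x0001: "Station Register Message",
        0x0006: "Station Off Hook Message",
        0x0007: "Station On Hook Message",
        0x0085: "Station Set Ringer Message",
    }

    def decode_skinny(frame: bytes) -> str:
        # Assumed framing: length (4 bytes, little-endian), reserved (4 bytes), message ID (4 bytes).
        length, _reserved, msg_id = struct.unpack("<III", frame[:12])
        return STATION_MESSAGES.get(msg_id, f"unknown (0x{msg_id:04X})")

    frame = struct.pack("<III", 4, 0, 0x0006)   # Off Hook, no extra payload
    print(decode_skinny(frame))                 # -> Station Off Hook Message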



What actually is VMsafe and the VMsafe API?


By Michael Haines, Sr. vCloud Architect (Security)

Today, security vendors such as Trend Micro, McAfee and others are entering the virtualization market and are looking for ways to develop and integrate their existing solutions (antivirus, personal firewall, intrusion detection, intrusion prevention, anti-spam, URL filtering, etc.) with VMware's ESX Server while trying to differentiate themselves from the competition.

So, I am sure you have heard a great deal about VMsafe and in particular the VMsafe API, right? The term 'VMsafe API' gets bandied around way too often for my liking! So what are VMsafe and the VMsafe API?

When I think of VMsafe, I think of it as more of a partner ecosystem program delivered by VMware. That is to say, what we have created and offer as part of this ecosystem program are three distinct sets of Application Programming Interfaces (APIs) that ISVs and developers can use to develop and build security applications and solutions for the virtual environment. I might add this is not for the faint-hearted! These APIs are split into three main areas:

- vCompute (CPU and Memory) API
- vNetwork Appliance (DVFilter) API
- VDDK API (for disk block inspection)


The vCompute CPU and Memory API.

So what does the vCompute CPU and Memory Inspection API do? In its most basic form, this API includes features that you can use to develop security applications that inspect memory access and CPU state before any code is actually executed.


The vNetwork Appliance (DVFilter) API

So what does the vNetwork Appliance (DVFilter) API do? This API enables you to provide a solution that protects network packet streams. With DVFilter you can create network packet filters and insert them into the virtual packet stream, between the vNIC and the virtual switch (vSwitch). Two agents, referred to as the fast-path agent and the slow-path agent, make up the "filter"; I'll write more on the fast-path and slow-path agents in a future blog. One of the key messages here is that the vNetwork Appliance APIs are not just for security; we envision many more use cases moving forward. In fact, you may not be aware of this, but Lab Manager was the first product to use DVFilter.


The VDDK API

So what does the VDDK API do? The Virtual Disk Development Kit is a collection of C libraries, code samples, utilities, and documentation that enable a developer who is creating applications to manage virtual storage. Yes, it’s an API and Software Development Kit (SDK). The Virtual Disk Development Kit includes the Virtual Disk API library functions, VMware disk utilities (which include the disk mount and virtual disk manager) and documentation. The primary audience for VDDK are ISVs who develop, for example, anti-virus security products.

So, how does one get access to the VMsafe partner ecosystem program? Well, firstly, the program itself is controlled in terms of which partners can get to it and use it. Today, only one API (VDDK) that is part of the VMsafe program is a public API; the vCompute and vNetwork APIs are not public. As I mentioned earlier, these APIs are not end-user APIs but rather are intended for security partners. For more information on these security partner APIs, go to VMware's Advanced Developer Portal. For more information on VMsafe in general, please visit us here.


NIST'S Policy on Hash Functions


March 15, 2006: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms. Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010. After 2010, Federal agencies may use SHA-1 only for the following applications: hash-based message authentication codes (HMACs); key derivation functions (KDFs); and random number generators (RNGs). Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.

TLS specifics of hash functions


  1. MAC constructions

    A number of operations in the TLS record and handshake layer require a keyed Message Authentication Code (MAC) to protect message integrity or to construct key derivation functions.

    For TLS 1.0 and 1.1, the construction used is known as HMAC; TLS 1.2 still uses HMAC, but it also declares that "Other cipher suites MAY define their own MAC constructions, if needed."


  2. HMAC at handshaking

    HMAC can be used with a variety of different hash algorithms. However, TLS 1.0 and TLS 1.1 use it in the handshake with two different algorithms, MD5 (HMAC-MD5) and SHA-1 (HMAC-SHA). Additional hash algorithms can be defined by cipher suites and used to protect record data, but MD5 and SHA-1 are hard-coded into the description of the handshake for TLS 1.0 and TLS 1.1.

    TLS 1.2 moves away from the hard-coded MD5 and SHA-1: SHA-256 is the default hash function for all cipher suites defined in TLS 1.2, TLS 1.1 and TLS 1.0 when TLS 1.2 is negotiated. TLS 1.2 also declares that "New cipher suites MUST explicitly specify a PRF and, in general, SHOULD use the TLS PRF with SHA-256 or a stronger standard hash function", which means that the hash function used in the handshake should be at least SHA-256.


  3. HMAC at protecting record

    For the HMAC operations used to protect record data, the hash function is defined by the cipher suite. For example, the HMAC hash function of the cipher suite TLS_RSA_WITH_NULL_MD5 is MD5.

    TLS 1.0 and TLS 1.1 define three hash functions for HMAC:

    • null
    • MD5
    • SHA1

    From TLS 1.2 on, new cipher suites may define their own MAC constructions in addition to the default HMAC. TLS 1.2 enumerates the following MAC algorithms, whose names make the underlying hash function obvious:

    • null
    • hmac_md5
    • hmac_sha1
    • hmac_sha256
    • hmac_sha384
    • hmac_sha512


  4. Pseudo-Random Function

    The pseudo-random function plays a key role in the TLS handshake: it is used to calculate the master secret, derive session keys, and verify the newly negotiated algorithms via the Finished message. The TLS specifications define the PRF based on HMAC.

    For TLS 1.0 and TLS 1.1, the PRF is created by splitting the secret into two halves and using one half to generate data with P_MD5 and the other half to generate data with P_SHA-1, then exclusive-ORing the outputs of these two expansion functions together.

    PRF(secret, label, seed) = P_MD5(S1, label + seed) XOR P_SHA-1(S2, label + seed);

    TLS 1.2 defines a PRF based on HMAC, as in TLS 1.0/1.1, except that the hash algorithm used is SHA-256: "This PRF with the SHA-256 hash function is used for all cipher suites defined in this document and in TLS documents published prior to this document when TLS 1.2 is negotiated. New cipher suites MUST explicitly specify a PRF and, in general, SHOULD use the TLS PRF with SHA-256 or a stronger standard hash function."

    Unlike TLS 1.0/1.1, the PRF of TLS 1.2 does not require splitting the secret any more; only one hash function is used (a minimal sketch of this construction appears after this list):

    PRF(secret, label, seed) = P_<hash>(secret, label + seed)


  5. Hash function at ServerKeyExchange

    In the handshake message ServerKeyExchange, some key exchange methods, such as RSA, diffie_hellman, ec_diffie_hellman, ecdsa, etc., need a so-called "signature" to protect the exchanged parameters.

    TLS 1.0 and TLS 1.1 use SHA-1 (or SHA-1 together with MD5) to generate the digest for the "signature". For TLS 1.2, the hash function may be other than SHA-1; it varies with the ServerKeyExchange message context, such as the "signature_algorithms" extension and the server end-entity certificate.


  6. Server Certificates

    In TLS 1.0/1.1, there is no way for the client to indicate to the server what kind of server certificates it would accept. TLS 1.2 defines an extension, signature_algorithms, to indicate to the server which signature/hash algorithm pairs may be used in digital signatures. The hash algorithm could be one of:

    • none
    • md5
    • sha1
    • sha224
    • sha256
    • sha384
    • sha512


  7. Client Certificates

    In TLS 1.0/1.1, a TLS server could request a series of client certificate types, but the "type" here refers to the "signature" algorithm and does not include the hash algorithm the certificate should be signed with. So a certificate signed with a stronger signature algorithm, such as 2048-bit RSA, but with a weak hash function, such as MD5, would meet the requirements. That's not enough.

    TLS 1.2 extends the CertificateRequest handshaking message with an additional field, "supported_signature_algorithms", to indicate to the client which signature/hash algorithm pairs may be used in digital signatures. The hash algorithm can be one of:

    • none
    • md5
    • sha1
    • sha224
    • sha256
    • sha384
    • sha512



What FIPS 140-2 Is Concerned With

The last update of the "Implementation Guidance for FIPS PUB 140-2" states: "The KDF in TLS is allowed only for the purpose of establishing keying material (in particular, the master secret) for a TLS session with the following restrictions, even though the use of the SHA-1 and MD5 hash functions are not consistent with Table 1 or Table 2 of SP 800-56A:"

  1. The use of MD5 is allowed in the TLS protocol only; MD5 shall not be used as a general hash function.
  2. The maximum number of blocks of secret keying material that can be produced by repeated use of the pseudorandom function during a single call to the TLS key derivation function shall be 2^32-1.


NIST's Policy-Compliant Profile for TLS

NIST's policy on hash functions can be split into four principles. We discuss the profile according to these principles.

  • Principle 1: The SHA-2 family of hash functions (i.e., SHA-224, SHA-256, SHA-384 and SHA-512) may be used by Federal agencies for all applications using secure hash algorithms.

    MD5 is not a FIPS-approved hash function, so first of all the profile needs to disable all cipher suites whose MAC algorithm is MD5:

    • TLS_RSA_WITH_NULL_MD5
    • TLS_RSA_EXPORT_WITH_RC4_40_MD5
    • TLS_RSA_WITH_RC4_128_MD5
    • TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_DH_anon_EXPORT_WITH_RC4_40_MD5
    • TLS_DH_anon_WITH_RC4_128_MD5
    • TLS_KRB5_WITH_DES_CBC_MD5
    • TLS_KRB5_WITH_3DES_EDE_CBC_MD5
    • TLS_KRB5_WITH_RC4_128_MD5
    • TLS_KRB5_WITH_IDEA_CBC_MD5
    • TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5
    • TLS_KRB5_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_KRB5_EXPORT_WITH_RC4_40_MD5

    The SHA-2 family of hash functions is fully compliant with the policy. The profile can safely enable the cipher suites based on SHA-2:

    • TLS_RSA_WITH_NULL_SHA256
    • TLS_RSA_WITH_AES_128_CBC_SHA256
    • TLS_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DH_DSS_WITH_AES_128_CBC_SHA256
    • TLS_DH_RSA_WITH_AES_128_CBC_SHA256
    • TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
    • TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
    • TLS_DH_DSS_WITH_AES_256_CBC_SHA256
    • TLS_DH_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
    • TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
    • TLS_DH_anon_WITH_AES_128_CBC_SHA256
    • TLS_DH_anon_WITH_AES_256_CBC_SHA256
    • TLS_RSA_WITH_AES_128_GCM_SHA256
    • TLS_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
    • TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DH_RSA_WITH_AES_128_GCM_SHA256
    • TLS_DH_RSA_WITH_AES_256_GCM_SHA384
    • TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
    • TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
    • TLS_DH_DSS_WITH_AES_128_GCM_SHA256
    • TLS_DH_DSS_WITH_AES_256_GCM_SHA384
    • TLS_DH_anon_WITH_AES_128_GCM_SHA256
    • TLS_DH_anon_WITH_AES_256_GCM_SHA384
    • TLS_PSK_WITH_AES_128_GCM_SHA256
    • TLS_PSK_WITH_AES_256_GCM_SHA384
    • TLS_DHE_PSK_WITH_AES_128_GCM_SHA256
    • TLS_DHE_PSK_WITH_AES_256_GCM_SHA384
    • TLS_RSA_PSK_WITH_AES_128_GCM_SHA256
    • TLS_RSA_PSK_WITH_AES_256_GCM_SHA384
    • TLS_PSK_WITH_AES_128_CBC_SHA256
    • TLS_PSK_WITH_AES_256_CBC_SHA384
    • TLS_PSK_WITH_NULL_SHA256
    • TLS_PSK_WITH_NULL_SHA384
    • TLS_DHE_PSK_WITH_AES_128_CBC_SHA256
    • TLS_DHE_PSK_WITH_AES_256_CBC_SHA384
    • TLS_DHE_PSK_WITH_NULL_SHA256
    • TLS_DHE_PSK_WITH_NULL_SHA384
    • TLS_RSA_PSK_WITH_AES_128_CBC_SHA256
    • TLS_RSA_PSK_WITH_AES_256_CBC_SHA384
    • TLS_RSA_PSK_WITH_NULL_SHA256
    • TLS_RSA_PSK_WITH_NULL_SHA384
    • TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
    • TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
    • TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
    • TLS_ECDHE_PSK_WITH_AES_128_CBC_SHA256
    • TLS_ECDHE_PSK_WITH_AES_256_CBC_SHA384
    • TLS_ECDHE_PSK_WITH_NULL_SHA256
    • TLS_ECDHE_PSK_WITH_NULL_SHA384

    The cipher suites whose MAC algorithm is SHA-1 are addressed under the following principles.

  • Principle 2: Federal agencies should stop using SHA-1 for digital signatures, digital time stamping and other applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010.

    Profile ServerKeyExchange Message

    ServerKeyExchange depends on a digital signature, so the profile should stop using the SHA-1 hash function for the ServerKeyExchange handshaking message.

    TLS 1.0 and TLS 1.1 use SHA-1 (or together with MD5) to generate the digest for the "signature". There is no way to disable SHA-1 in the ServerKeyExchange handshaking message. ServerKeyExchange is an optional handshaking message; "it is sent by the server only when the server certificate message (if sent) does not contain enough data to allow the client to exchange a premaster secret. This is true for the following key exchange methods:"

    • RSA_EXPORT (if the public key in the server certificate is longer than 512 bits)
    • DHE_DSS
    • DHE_DSS_EXPORT
    • DHE_RSA
    • DHE_RSA_EXPORT
    • DH_anon

    For TLS 1.0 and TLS 1.1, the profile needs to disable the above key exchange methods, in order to prevent the ServerKeyExchange handshaking message from occurring, by disabling the following cipher suites:

    • TLS_RSA_EXPORT_WITH_RC4_40_MD5
    • TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
    • TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_DSS_WITH_DES_CBC_SHA
    • TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
    • TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DHE_RSA_WITH_DES_CBC_SHA
    • TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
    • TLS_DH_anon_EXPORT_WITH_RC4_40_MD5
    • TLS_DH_anon_WITH_RC4_128_MD5
    • TLS_DH_anon_EXPORT_WITH_DES40_CBC_SHA
    • TLS_DH_anon_WITH_DES_CBC_SHA
    • TLS_DH_anon_WITH_3DES_EDE_CBC_SHA
    • TLS_DHE_DSS_WITH_AES_128_CBC_SHA
    • TLS_DHE_RSA_WITH_AES_128_CBC_SHA
    • TLS_DH_anon_WITH_AES_128_CBC_SHA
    • TLS_DHE_DSS_WITH_AES_256_CBC_SHA
    • TLS_DHE_RSA_WITH_AES_256_CBC_SHA
    • TLS_DH_anon_WITH_AES_256_CBC_SHA

    In TLS 1.2, the hash function used with ServerKeyExchange may be other than SHA-1; the following rules are defined:

    • Signature Algorithm Extension: If the client has offered the "signature_algorithms" extension, the signature algorithm and hash algorithm used in the ServerKeyExchange message MUST be a pair listed in that extension.

      Per this rule, the profile requires that the "signature_algorithms" extension sent by the client include only SHA-2 or stronger hash algorithms, and not include the hash algorithms "none", "md5", and "sha1".

    • Compatible with the Key in the Server's EE Certificate: the hash and signature algorithms used in the ServerKeyExchange message MUST be compatible with the key in the server's end-entity certificate.

      Per this rule, the profile requires that the server end-entity certificate must be signed with SHA-2 or stronger hash functions.

      Note that, at present, DSA (DSS) may only be used with SHA-1, so the profile does not allow server end-entity certificates signed with DSA (DSS).

    Profile Server Certificate

    In TLS 1.0/1.1, there is no way for the client to indicate to the server what kind of server certificates it would accept. What we can do here is act at the programming and management level: the profile requires that all server certificates be signed with SHA-2 or stronger hash functions, and that the chain be carefully checked so that no certificate in it is signed with a weaker hash function.

    In TLS 1.2, there is a protocol-specified behavior, the "signature_algorithms" extension: "If the client provided a 'signature_algorithms' extension, then all certificates provided by the server MUST be signed by a hash/signature algorithm pair that appears in that extension." Per the specification, the profile requires that the "signature_algorithms" extension sent by the client include only SHA-2 or stronger hash algorithms, and not include the hash algorithms "none", "md5", and "sha1".

    However, "signature_algorithms" extension is not a mandatory extension in TLS 1.2, while server does not receive the "signature_algorithms" extension, it also needs to ship the NIST principle. So the profile still requires all server certificates must be signed with SHA-2 or stronger hash functions from the point of programming and management.

    Profile Client Certificate

    In TLS 1.0/1.1, there is no way for the server to indicate to the client what kind of hash algorithm the client certificates should be signed with. What we can do here is act at the programming and management level: the profile requires that all client certificates be signed with SHA-2 or stronger hash functions.

    TLS 1.2 extends the CertificateRequest handshaking message with an additional field, "supported_signature_algorithms", to indicate to the client which signature/hash algorithm pairs may be used in digital signatures. The profile requires that the "supported_signature_algorithms" field include only SHA-2 or stronger hash algorithms, and not include the hash algorithms "none", "md5", and "sha1".

  • Principle 3: After 2010, Federal agencies may use SHA-1 only for the following applications:
    • hash-based message authentication codes (HMACs);
    • key derivation functions (KDFs);
    • random number generators (RNGs).

    Except for the ServerKeyExchange, server Certificate, and client Certificate messages, the hash functions used in the TLS protocols serve HMAC, KDF, or RNG purposes, which is allowed by the policy. No additional profile is needed for this principle.

  • Principle 4: Regardless of use, NIST encourages application and protocol designers to use the SHA-2 family of hash functions for all new applications and protocols.

    TLS 1.0 and TLS 1.1 depend entirely on SHA-1 and MD5, so there is no way to obey this principle. In order to fully remove the dependency on SHA-1/MD5, one has to upgrade to TLS 1.2 or a later revision.


A strict mode profile

  1. Disable all cipher suites whose MAC algorithm is MD5;
  2. Disable all cipher suites that may trigger the ServerKeyExchange message;
  3. Accept only certificates signed with SHA-2 or stronger hash functions;
  4. Upgrade to TLS 1.2 in order to fully remove the dependence on weak hash functions.

Put it into practice

Currently, the Java SDK does not support TLS 1.1 or later. The proposals discussed here are for TLS 1.0, which is implemented by the default SunJSSE provider.

  1. Disable cipher suite

    JSSE has no API to disable a particular cipher suite, but there are APIs to set which cipher suites can be used during handshaking. Refer to SSLSocket.setEnabledCipherSuites(String[] suites), SSLServerSocket.setEnabledCipherSuites(String[] suites), and SSLEngine.setEnabledCipherSuites(String[] suites) for detailed usage.

    By default, SunJSSE enables both MD5- and SHA-1-based cipher suites, as well as cipher suites that trigger the ServerKeyExchange message. In FIPS mode, SunJSSE enables SHA-1-based cipher suites only; however, some of the cipher suites that trigger ServerKeyExchange are still enabled. So, considering the above strict mode profile, the coder must explicitly call setEnabledCipherSuites(String[] suites) on the SSLSocket/SSLServerSocket/SSLEngine, and the "suites" parameter must include neither MD5-based cipher suites nor cipher suites that trigger the ServerKeyExchange handshaking message, as sketched below.
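
    The following Java sketch shows one way to do that filtering, based on the suite-name conventions used earlier in this post: drop every suite whose name ends with _MD5, and every suite whose key exchange may trigger ServerKeyExchange (the DHE_*, DH_anon and EXPORT suites; ECDHE suites, if enabled, also trigger ServerKeyExchange and would need the same treatment). The host name is a placeholder, and a real deployment would rather maintain an explicit allow list.

        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;
        import java.util.ArrayList;
        import java.util.List;

        public class StrictCipherSuites {

            // Keep a suite only if its MAC is not MD5 and its key exchange cannot
            // trigger the ServerKeyExchange handshaking message.
            static boolean allowed(String suite) {
                if (suite.endsWith("_MD5")) return false;
                if (suite.contains("_DHE_") || suite.contains("_DH_anon")) return false;
                if (suite.contains("_EXPORT_")) return false;
                return true;
            }

            static String[] filter(String[] enabled) {
                List<String> keep = new ArrayList<String>();
                for (String suite : enabled) {
                    if (allowed(suite)) keep.add(suite);
                }
                return keep.toArray(new String[keep.size()]);
            }

            public static void main(String[] args) throws Exception {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                SSLSocket socket = (SSLSocket) factory.createSocket("www.example.com", 443);

                // Replace the provider defaults with the filtered list before handshaking.
                socket.setEnabledCipherSuites(filter(socket.getEnabledCipherSuites()));
                socket.startHandshake();
                System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
                socket.close();
            }
        }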

  2. Constrain certificate signature algorithm

    The strict profile suggests that all certificates be signed with SHA-2 or stronger hash functions. In JSSE, the processes of choosing a certificate for the remote peer and validating the certificate received from the remote peer are controlled by KeyManager/X509KeyManager and TrustManager/X509TrustManager. By default, the SunJSSE provider does not set any limit on the certificate's hash functions. Considering the above strict profile, the coder should customize the KeyManager and TrustManager so that only certificates signed with SHA-2 or stronger hash functions are made available or trusted.

    Please refer to the X509TrustManager Interface section in the JSSE Reference Guide for details about how to customize the trust manager by creating your own X509TrustManager, and to the X509KeyManager Interface section in the JSSE Reference Guide for details about how to customize the key manager by creating your own X509KeyManager.
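
    As a sketch of that customization (the class name and structure are mine, not from the JSSE Reference Guide), the wrapper below delegates to the platform's default X509TrustManager and additionally rejects any certificate in the chain whose signature algorithm does not use a SHA-2 family hash.

        import java.security.KeyStore;
        import java.security.cert.CertificateException;
        import java.security.cert.X509Certificate;
        import javax.net.ssl.TrustManager;
        import javax.net.ssl.TrustManagerFactory;
        import javax.net.ssl.X509TrustManager;

        public class Sha2TrustManager implements X509TrustManager {

            private final X509TrustManager delegate;

            public Sha2TrustManager() throws Exception {
                TrustManagerFactory tmf =
                        TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
                tmf.init((KeyStore) null);  // use the default trust store
                X509TrustManager found = null;
                for (TrustManager tm : tmf.getTrustManagers()) {
                    if (tm instanceof X509TrustManager) {
                        found = (X509TrustManager) tm;
                        break;
                    }
                }
                this.delegate = found;
            }

            // Reject any certificate whose signature algorithm is not SHA-2 based,
            // e.g. "MD5withRSA" or "SHA1withRSA".
            private void checkSignatureHash(X509Certificate[] chain) throws CertificateException {
                for (X509Certificate cert : chain) {
                    String sigAlg = cert.getSigAlgName().toUpperCase();
                    if (!(sigAlg.startsWith("SHA224") || sigAlg.startsWith("SHA256")
                            || sigAlg.startsWith("SHA384") || sigAlg.startsWith("SHA512"))) {
                        throw new CertificateException("Weak signature hash: " + sigAlg);
                    }
                }
            }

            public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
                checkSignatureHash(chain);
                delegate.checkServerTrusted(chain, authType);
            }

            public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
                checkSignatureHash(chain);
                delegate.checkClientTrusted(chain, authType);
            }

            public X509Certificate[] getAcceptedIssuers() {
                return delegate.getAcceptedIssuers();
            }
        }

    The wrapper can then be passed to SSLContext.init(null, new TrustManager[] { new Sha2TrustManager() }, null); an analogous wrapper around the default X509KeyManager can restrict which local certificates are offered to the peer.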


Note that the above profile and suggestions are my personal understanding of NIST's policy and TLS; they are my own suggestions, not official proposals from any official organization.


[DB] Security Requirements for DB Encryption Products

2010. 5. 27. 10:20 | Posted by 꿈꾸는코난

 Security Requirements for DB Encryption Products


Security Requirements for DB Encryption Products (April 2010)

1. Introduction
2. Product Overview
3. Security Threats
4. Security Requirements
    4.1 Cryptographic Support
    4.2 Cryptographic Key Management
    4.3 DB Data Encryption/Decryption
    4.4 Access Control
    4.5 Encrypted Communication
    4.6 Identification and Authentication
    4.7 Security Audit
    4.8 Security Management
5. Considerations for Product Operation

Source: National Intelligence Service website - IT Security Certification Office

[Misc] reCAPTCHA

2009. 9. 8. 17:10 | Posted by 꿈꾸는코난


reCAPTCHA is a free CAPTCHA service that helps to digitize books, newspapers and old time radio shows. Check out our paper in Science about it (or read more below).

A CAPTCHA is a program that can tell whether its user is a human or a computer. You've probably seen them — colorful images with distorted text at the bottom of Web registration forms. CAPTCHAs are used by many websites to prevent abuse from "bots," or automated programs usually written to generate spam. No computer program can read distorted text as well as humans can, so bots cannot navigate sites protected by CAPTCHAs.

About 200 million CAPTCHAs are solved by humans around the world every day. In each case, roughly ten seconds of human time are being spent. Individually, that's not a lot of time, but in aggregate these little puzzles consume more than 150,000 hours of work each day. What if we could make positive use of this human effort? reCAPTCHA does exactly that by channeling the effort spent solving CAPTCHAs online into "reading" books.

To archive human knowledge and to make information more accessible to the world, multiple projects are currently digitizing physical books that were written before the computer age. The book pages are being photographically scanned, and then transformed into text using "Optical Character Recognition" (OCR). The transformation into text is useful because scanning a book produces images, which are difficult to store on small devices, expensive to download, and cannot be searched. The problem is that OCR is not perfect.

Example of OCR errors

reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly.

But if a computer can't read such a CAPTCHA, how does the system know the correct answer to the puzzle? Here's how: Each new word that cannot be read correctly by OCR is given to a user in conjunction with another word for which the answer is already known. The user is then asked to read both words. If they solve the one for which the answer is known, the system assumes their answer is correct for the new one. The system then gives the new image to a number of other people to determine, with higher confidence, whether the original answer was correct.

Currently, we are helping to digitize old editions of the New York Times.

How can I help?

In order to achieve our goal of digitizing books, we need your help.

If you run a website that suffers from problems with spam, you can put reCAPTCHA on your site. For some applications (such as Wordpress and Mediawiki), we have plugins that allow you to use reCAPTCHA without writing any code. We also have easy-to-use code for common web programming languages such as PHP.

If you get email spam we have a method that will help you to reduce it. Many spammers crawl the web looking for email addresses. When they see an email address on a web page, they send spam to the address. Mailhide allows you to safely post your email address on the web. Mailhide takes an address such as jsmith@example.com and turns it into jsm...@example.com. In order to reveal the address, a user must click on the "..." and solve a reCAPTCHA. If you use the Mailhide version of your email address, spammers won't be able to find your real email address and you'll get less spam.
