Huawei Optix OSN 8800 Release 5.51.07.36 and Optix OSN 1800 Release 5.67.03.22 - part 1

 


 

 


 

 

Redpaper
Bill White
Pasquale Catalano
Andrew Crimmins
IBM System z Qualified DWDM: Huawei Optix OSN 8800 Release 5.51.07.36
and Optix OSN 1800 Release 5.67.03.22
This IBM® Redpaper™ publication is one in a series that describes IBM System z® qualified
dense wavelength division multiplexing (DWDM) vendor products for IBM Geographically
Dispersed Parallel Sysplex™ (IBM GDPS®) solutions with Server Time Protocol (STP). The
protocols described in this paper are used for IBM supported solutions that require cross-site
connectivity of a multisite Parallel Sysplex or remote copy technologies, which can include
GDPS and non-GDPS applications. GDPS qualification testing is conducted at the IBM
Vendor Solutions Connectivity (VSC) Lab in Poughkeepsie, NY.
IBM and Huawei Technologies Co., Ltd. successfully completed qualification testing of the
Optix OSN 8800 and Optix OSN 1800 platforms. This paper describes the applicable
environments, protocols, and topologies that are qualified and supported by System z for
connecting through the Optix OSN 8800 and Optix OSN 1800 platforms hardware and
software, Release level 5.51.07.36 (Optix OSN 8800) and 5.67.03.22 (Optix OSN 1800).
This paper is intended for anyone who wants to learn more about Optix OSN 8800 and Optix
OSN 1800 Release level 5.51.07.36 (Optix OSN 8800) and 5.67.03.22 (Optix OSN 1800).
This document is not meant to be used to determine qualified products. To ensure that the
products planned for implementation are qualified, registered users can see the
IBM Resource Link® library for current information about qualified DWDM vendor products:
For more information about IBM Redbooks® publications for System z qualified DWDM
vendor products, see this website:
ibm.com/redbooks
System z GDPS qualification overview
GDPS is an enterprise-wide continuous availability (CA) and disaster recovery (DR)
automation solution that can manage recovery from planned and unplanned outages across
distributed servers and System z servers. GDPS can be configured in either a single site or in
a multisite configuration. It is designed to manage remote copy configuration between storage
subsystems, automate Parallel Sysplex operational tasks, and affect failure recovery. This
configuration is done from a single point-of-control, which leads to improved application
availability. Historically, this solution was known as a GDPS. Today, GDPS continues to be
applied as a general term for a suite of business continuity solutions. This term includes
solutions that do not require a dispersed or multisite sysplex environment.
GDPS supports the following forms of remote copy in multisite solutions:
- IBM System Storage® Metro Mirror, a synchronous form of remote copy previously known
  as Peer-to-Peer Remote Copy (PPRC)
- IBM System Storage Global Mirror, an asynchronous form of remote copy for System z
  and distributed systems
- IBM System Storage z/OS® Global Mirror, an asynchronous form of remote copy for
  System z, previously known as extended remote copy (XRC)
Depending on the form of the remote copy that is implemented, the GDPS solution is known
as one of these configurations:
- GDPS with Metro Mirror
- GDPS with Global Mirror
- GDPS with z/OS Global Mirror
GDPS also offers two solutions that combine the technologies of Metro Mirror with either
Global Mirror or z/OS Global Mirror. This combination allows clients to meet requirements for
CA with zero data loss locally within metropolitan distances (for most failures). The
combination also provides a disaster recovery solution in the case of a region-wide disaster.
The disaster recovery solutions include the following options:
- GDPS with Metro and Global Mirror (GDPS/MGM), which is a cascading data
  replication solution for both System z and distributed systems data
- GDPS with Metro and z/OS Global Mirror (GDPS/MzGM), which is a multitarget data
  replication solution for System z data
The GDPS solution is also disk vendor independent, provided that the vendor meets the
specific levels of the Metro Mirror, Global Mirror, and z/OS Global Mirror architectures.
For more information about GDPS, see this website:
IBM supports DWDM products that are qualified by System z for use in GDPS solutions. To
obtain this qualification, DWDM vendors obtain licensed IBM patents, intellectual property,
and know-how that are related to the GDPS architecture. This licensing provides vendors
access to the proprietary IBM protocols and applications that are used in a GDPS
environment. These proprietary items include IBM Sysplex Timer, InterSystem Channel
(ISC), STP, Metro Mirror, Global Mirror, and z/OS Global Mirror.
Licensing of IBM patents also provides the DWDM vendor with technical information that
pertains to future IBM releases. Qualified vendors typically license this information for an
extended period. This license allows them to subscribe to the latest GDPS architecture
changes and to be among the first to the market with offerings that support these features.
Licensing: Check with your DWDM vendor for current IBM technology and patent
licensing status.
In addition, IBM tested and qualified these vendor products with the same test environment
and procedures that were used to test the protocols that provide the required connectivity of a
GDPS configuration. This testing included functionality and recovery verification. Having
access to these test facilities allows IBM to configure a fully functional sysplex. By using the
test facilities, you can simulate failure and recovery actions that cannot be tested as part of a
working client environment.
IBM has the facilities to test and qualify these products with both current and previous
generation equipment within the VSC Lab. With this qualification testing, IBM can reproduce
any concerns that might arise when using this equipment in a client application.
Figure 1 shows the test environment that is used for DWDM vendor qualification and the
logical connections for IBM System Storage Metro Mirror and Global Mirror.
[Figure: VSC Lab test environment. zEnterprise Systems at two sites are interconnected
through a qualified Ethernet switch (OSA-Express 1/10 GbE and IEDN 10 GbE), ISC-3 Peer and
PSIFB LR links with STP messaging, and qualified FCP/FICON Directors joined by Inter-Switch
Links (ISLs) at distances up to 100 km (RPQ required for longer distances), with FCP/FICON
(1/2/4/8 Gbps) connections to System Storage and IBM System Storage Metro Mirror between
sites.]
Figure 1 VSC Lab environment
Qualification testing
The VSC Lab contains System z hardware with software applications that test the
interoperability of DWDM products within a GDPS. A typical qualification test cycle is six to
eight weeks in length.
The hardware that was used for testing purposes includes (but is not limited to) the following
components:
- IBM System z servers enabled for STP
- IBM 9037 Model 2 Sysplex Timers
- IBM System Storage
- IBM System z qualified Fibre Connection (FICON®) Directors
- IBM Ethernet products
- IBM zEnterprise® BladeCenter® Extension (10 Gigabit Ethernet)
DWDM links of varying distances are deployed by using spools of single-mode fiber in lengths
5 - 50 km (3.11 - 31.07 miles). Multiple spools are interconnected to test DWDM link protocols
up to the maximum supported distances. To achieve the maximum distances that are
qualified for GDPS protocols, vendors can use optical amplifiers (OAs) and dispersion
compensation units (DCUs). They can insert the OAs and DCUs at various link points to
condition the signals on the fiber links. These links are connected to the DWDM equipment.
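For a rough sense of scale, the propagation delay that a fiber span contributes can be estimated from its length. The sketch below assumes a group refractive index of about 1.47 for single-mode fiber (roughly 4.9 microseconds per km); that figure is a common rule of thumb, not a value taken from the qualification testing.

```python
# Estimate propagation delay of a fiber span from its length.
# Assumes a group refractive index of ~1.47 for single-mode fiber
# (a common rule of thumb, not a figure from the qualification tests).

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s
GROUP_INDEX = 1.47           # typical for single-mode fiber

def one_way_delay_us(length_km: float) -> float:
    """One-way propagation delay in microseconds."""
    return length_km * GROUP_INDEX / C_VACUUM_KM_S * 1e6

def round_trip_delay_us(length_km: float) -> float:
    """Round-trip propagation delay in microseconds."""
    return 2 * one_way_delay_us(length_km)

# Spool lengths used in the VSC Lab (5 - 50 km) and the 100 km maximum:
for km in (5, 50, 100):
    print(f"{km:>3} km: {one_way_delay_us(km):6.1f} us one-way, "
          f"{round_trip_delay_us(km):6.1f} us round trip")
```

At the 100 km qualified maximum, each one-way trip costs roughly half a millisecond, which is one reason link distance directly affects remote copy and STP message latency.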
Qualified DCUs: Fiber-based DCUs are not qualified for use in STP applications unless
stated in the qualification letter.
The operating system and application software are installed to create and to stress test the
GDPS environment. The software used in the test environment includes, but is not limited to,
the following components:
- z/OS, Linux on System z, and Parallel Sysplex software exploiters
- Coupling Facility Control Code (CFCC)
- IBM proprietary software and microcode utility test suites
As part of the GDPS qualification test, IBM proprietary software and microcode utility test
suites are used. The software and test suites drive the various GDPS components and
protocols to the full data rate of each link type that is transported by the DWDM equipment.
This level of testing ensures that the maximum channel utilization is achieved and tested to
levels well beyond typical client environments.
The test suites are used for verification of System z architecture functionality. For a trial to be
classified as successful during these functionality tests, no errors can be detected by the
attached subsystems. Any errors that are detected during this testing are captured and
analyzed by the test suites.
The test suites are also used for verification of System z architecture recovery by creating
various fault and error conditions. The recovery tests check for the correct detection of a fault
or error condition by the attached subsystems, and ensure that the recovery adheres to
System z architecture rules.
Some of the recovery tests conducted for each link type include the following actions:
- Link state change interrupt detection and recovery: Links are deliberately broken and
  reestablished to ensure that detection and error recovery occur correctly.
- Link error threshold and link synchronization error detection and recovery: Errors are
  deliberately injected, at the application and channel subsystem levels, into the link protocol
  data streams to ensure that detection and error recovery take place correctly.
- Link service and maintenance package recovery: Link hardware maintenance actions are
  conducted to ensure that link state change detection and recovery take place correctly.
- Link protection schemes and recovery: Vendor-specific protection methods are tested to
  ensure that the expected link errors are detected and that recovery takes place correctly.
- STP timing link recovery: STP timing links are broken to ensure that changes to the
  Coordinated Timing Network (CTN) behave correctly when the break occurs and when the
  links are reconnected.
GDPS components and protocols
The IBM technologies featured in this section are functional components of GDPS and are
tested during the qualification process. Clients can also use these components in
environments that do not require a full GDPS solution. The testing provides a level of
assurance that the components function when used with a qualified DWDM platform.
Components
The following GDPS components are tested during the qualification process:
- IBM System z servers
- IBM Parallel Sysplex
- IBM System Storage
- IBM System Storage Metro Mirror (PPRC)
- IBM System Storage Global Mirror
- IBM System Storage z/OS Global Mirror (XRC)
- Vendor DWDM platform
- IBM Ethernet products
- IBM zEnterprise BladeCenter Extension
- Inter-Switch Links (ISLs) between two System z qualified FICON/Fibre Channel Directors
Protocols
Table 1 lists the GDPS connectivity protocols and their data transfer rates. Not all protocols
were tested on the Optix OSN 8800 and Optix OSN 1800 platforms. For a complete list of the
protocols and interface cards that were qualified, see Table 2 on page 14.
Table 1 GDPS supported protocols

  Protocol                                                              Data transfer rate
  Enterprise Systems Connection (IBM ESCON®)                            200 Mbps (a)
  Control Link Oscillator (CLO)                                         8 Mbps
  External Time Reference (ETR)                                         8 Mbps
  Fibre Connection (FICON)                                              1 Gbps
  Fibre Connection (FICON) Express2                                     1, 2 Gbps
  Fibre Connection (FICON) Express4                                     1, 2, 4 Gbps
  Fibre Connection (FICON) Express8                                     2, 4, 8 Gbps
  Fibre Channel FC100/FC200/FC400/FC800                                 1, 2, 4, 8 Gbps
  Inter-Switch Link (ISL) FC100/FC200/FC400/FC800/FC1000/FC1600         1, 2, 4, 8, 10, 16 Gbps
  InterSystem Channel-3 (ISC-3) Peer Mode                               2 Gbps
  STP (ISC-3 Peer Mode with STP message passing)                        2 Gbps
  STP (PSIFB LR 1x IB-single data rate (SDR) with STP message passing)  2.5 Gbps
  STP (PSIFB LR 1x IB-double data rate (DDR) with STP message passing)  5 Gbps
  Gigabit Ethernet (GbE)                                                1, 10 Gbps

  a. Effective channel data rate of an ESCON channel is affected by distance.
These supported protocols are often also used in non-GDPS environments. The robust
testing that is conducted during the qualification process gives clients a high level of
confidence when using these System z qualified DWDM vendor platforms in non-GDPS
environments.
Server Time Protocol
Server Time Protocol (STP) is designed to provide the capability for multiple servers and
Coupling Facilities (CFs) to maintain time synchronization with each other without requiring
an IBM Sysplex Timer. STP can help provide functional and economic benefits when
compared to the Sysplex Timer. STP allows concurrent migration from an ETR network and
can coexist with an ETR network.
STP is a message-based protocol in which STP timekeeping information is passed over
externally defined coupling links: InterSystem Channel-3 (ISC-3) Peer Mode and PSIFB links.
ISC-3 links in peer mode and PSIFB Long Reach (LR) are the only coupling links that can be
used to transport STP messages between data centers over a DWDM platform.
The STP design introduced a concept called Coordinated Timing Network (CTN), which is a
collection of servers and CFs that are time synchronized to a time value. This value is called
Coordinated Server Time (CST). The CST represents the time for the entire network of
servers. A CTN can be configured as either an STP-only CTN or a Mixed CTN. For more
information, see the following resources:
- Server Time Protocol Planning Guide, SG24-7280
- Server Time Protocol Implementation Guide, SG24-7281
- Server Time Protocol Recovery Guide, SG24-7380
Figure 2 shows a multisite STP-only CTN.
[Figure: two sites, each with a System z server and System Storage. The servers are
connected by PSIFB LR or ISC-3 Peer links with STP messaging, by OSA-Express (1 GbE and
10 GbE) through a qualified Ethernet switch, and by FCP/FICON through qualified FICON
Directors joined by an ISL.]
Figure 2 Multisite STP-only CTN
IBM Sysplex Timer
The Sysplex Timer is a mandatory component of an ETR network and an STP Mixed CTN.
The Sysplex Timer provides an ETR to synchronize the time-of-day (TOD) clocks on the
attached System z servers in a GDPS or PPRC environment.
Figure 3 shows a multisite ETR network.
[Figure: two sites, each with a Sysplex Timer and a System z server. The Sysplex Timers are
connected by CLO links, and the servers by ETR and ISC links, by OSA-Express (1 GbE and
10 GbE) through a qualified Ethernet switch, and by FCP/FICON through qualified FICON
Directors joined by an ISL to System Storage.]
Figure 3 Multisite ETR network with Sysplex Timers
The Sysplex Timer uses two link types:
- External time reference (ETR): ETR links are connections between the Sysplex Timer and
  the System z server ETR ports that provide TOD clock synchronization between multiple
  servers.
- Control link oscillator (CLO): CLO links are connections between two Sysplex Timer units
  in an expanded availability configuration that allow synchronization of the Sysplex Timer
  timing signals.
IBM zEnterprise BladeCenter Extension
The IBM zEnterprise BladeCenter Extension (zBX) is available as an option with the IBM
zEnterprise System. The zBX brings the computing capacity of systems in blade form-factor
to the zEnterprise System and provides several distributed environments, such as IBM AIX®
on POWER7®, Linux on System x®, and Microsoft Windows on System x.
The zBX is designed with a redundant hardware infrastructure that supports the
multi-platform environment in a seamless, integrated way. Its hardware features are part of
the mainframe, not add-ons.
The zBX and the zEnterprise central processor complex (CPC) are interconnected using a
high-speed private network called an intraensemble data network (IEDN), which consists of
top-of-rack 10 GbE switches in the zBX and OSA-Express 10 GbE features in the zEnterprise
CPC. The IEDN provides private and secure data paths between all elements of a
zEnterprise ensemble. A zEnterprise ensemble is a collection of highly virtualized diverse
systems that can be managed as a single logical entity, and where diverse workloads can be
deployed. It is composed of up to eight nodes, each of which consists of a zEnterprise CPC
and an optional zBX (see Figure 4).
[Figure: a zEnterprise node. The zEnterprise CPC connects through OSA-Express 10 GbE
features over the intraensemble data network to top-of-rack switches in the zBX.]
Figure 4 zEnterprise node
GDPS/PPRC and GDPS/GM support zBX hardware components, providing workload failover
for automated multi-site recovery. These capabilities can help facilitate the management of
planned and unplanned outages across the zEnterprise System.
For more information regarding the zBX, see Building an Ensemble Using IBM zEnterprise
Unified Resource Manager, SG24-7921.
Connectivity considerations with DWDM
When planning DWDM intersite connectivity for GDPS environments, consider the following
items:
- Differential delay is the difference in the distance or latency between a transmit fiber and
  a receive fiber in a single fiber pair of a timing link. When configuring, adhere to the
  following requirements:
  - To ensure correct server time synchronization, examine the end-to-end lengths of the
    transmit and receive fibers within an individual STP link (ISC-3 Peer Mode with STP
    messaging or PSIFB). The lengths must not exceed the equivalent of 900 meters
    differential delay between the transmit and receive paths of the link. This stipulation
    includes all dense wavelength division multiplexing (DWDM) components, OAs,
    DCUs [1], dark fiber links, and any time-division multiplexing (TDM)-based aggregation.
  - To ensure correct Sysplex Timer and server time synchronization, the end-to-end
    lengths of the transmit and receive fibers within an individual ETR or CLO link must be
    equal (within 10 meters). This stipulation includes all DWDM components, OAs, DCUs,
    and dark fiber links.

  Supported devices: DCUs might contain significant lengths of fiber that must be
  included in the differential delay calculation for timing links. Not all DCUs are
  supported. Always check the qualification letters for supported devices.
- If both the CF data and STP timing information must be transmitted between two servers,
  you cannot select a subset of coupling links to be used just for STP timing information.
- Protection schemes, for individual timing links or for fiber trunks that transport timing links,
  must be bidirectional (switch both transmit and receive paths in the event of a failure). This
  configuration ensures that the correct differential delay is maintained.
- DWDM configurations must have high availability topologies in place to ensure that there
  are no single points of failure.
[1] Fiber-based DCUs are not qualified for use in STP applications unless stated in the qualification letter.
- Fiber trunk protection schemes must be designed with two trunk switching modules and
  four site-to-site fiber pairs that are carried over at least two diverse routes. STP, ETR, and
  CLO links must connect by using separate trunk switching modules to ensure that a fiber
  trunk protection event does not interrupt all timing links simultaneously.
- TDM-based aggregation DWDM hardware can be used for STP links (ISC-3 Peer Mode or
  PSIFB LR with STP message passing) only if it is qualified for STP usage.
- The GDPS DWDM configuration must be a point-to-point fixed dark fiber network.
For more information about the STP recovery process, see the Server Time Protocol
Recovery Guide, SG24-7380.
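The differential delay requirements above lend themselves to a simple arithmetic check. The following sketch is illustrative only (the element names and lengths are hypothetical, and this is not part of the IBM qualification procedure): it totals the fiber length in each direction of a timing link, including fiber coiled inside DCUs, and compares the difference against the 900 m (STP) and 10 m (ETR/CLO) limits.

```python
# Illustrative check of timing-link differential delay limits.
# The limits come from the requirements above; the link elements
# below are hypothetical examples, not a qualified configuration.

STP_LIMIT_M = 900      # ISC-3 Peer Mode / PSIFB LR with STP messaging
ETR_CLO_LIMIT_M = 10   # ETR and CLO links

def path_length_m(elements):
    """Total fiber length of one direction of the link, including dark
    fiber spans and any fiber inside DCUs or other DWDM components."""
    return sum(length_m for _name, length_m in elements)

def within_limit(tx_path, rx_path, limit_m):
    """True if the transmit/receive length difference is within limit_m."""
    return abs(path_length_m(tx_path) - path_length_m(rx_path)) <= limit_m

# Hypothetical STP link: 40 km of dark fiber each way, but the receive
# direction also passes through a DCU containing 2 km of fiber.
tx = [("dark fiber", 40_000)]
rx = [("dark fiber", 40_000), ("DCU fiber", 2_000)]

print(within_limit(tx, rx, STP_LIMIT_M))  # 2000 m difference > 900 m: False
```

Because the DCU adds 2 km to only one direction, this hypothetical link exceeds the 900 m STP limit; the same pair of paths would also fail the far stricter 10 m ETR/CLO limit.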
Technical description
The Optix OSN 8800 and Optix OSN 1800 platforms are scalable, high-speed fiber-optic data
transport systems. They consist of modular chassis, which can be interconnected to form
an optical network that supports International Telecommunications Union (ITU)-specific
wavelengths. The wavelengths are multiplexed onto a single pair of fibers by using DWDM.
The optical network that connects the Optix OSN 8800 and Optix OSN 1800 can be
configured as a two-site point-to-point topology, a multisite ring, or a meshed network.
Support note: IBM only qualifies point-to-point topologies for GDPS solutions. Other
topologies might contain significant length variations of fiber that go beyond the differential
delay limits for timing links and are, therefore, not supported.
Multiple client interfaces can be aggregated onto a single wavelength with the use of
Muxponder interface cards.
The Optix OSN 8800 and Optix OSN 1800 platforms were qualified by using software
Release level 5.51.07.36 (Optix OSN 8800) and 5.67.03.22 (Optix OSN 1800). They have a
modular chassis that can house multiple optical interface cards, depending on the chassis.
Multiple Optix OSN 8800 and Optix OSN 1800 chassis can be interconnected to support
larger quantities of client interfaces.
Interface cards and modules
This section includes a detailed list of the Optix OSN 8800 and Optix OSN 1800 optical
interface cards and modules that are qualified by System z GDPS testing.
The following optical interface cards and modules are qualified:
Muxponder Cards
The Optix OSN 8800 and Optix OSN 1800 support the aggregation of client signals.
Muxponder cards use time-division multiplexing (TDM) to aggregate multiple client optical
interfaces for transport over a single ITU-specific wavelength.
Muxponder cards have pluggable client optical transceivers that support different client
protocols and fiber types.
The following Optix OSN 8800 Muxponder card types are qualified:
- TN12LOM: 8 client port, 1 (10 Gbps) line port muxponder module
- TN11LOA: 8 client port, 1 (10 Gbps) line port muxponder module

 

 

 

 

 

 

 
