We invite you on a tour of our manufacturing facilities in Carrollton, Georgia, USA. View the highly automated OFS manufacturing process that produces a wide variety of fiber optic products for telecommunications applications. Loose tube, microcables, flat ribbon, ADSS, ultra-high-density rollable ribbon, and premises cables are all made here.
The facility is registered in compliance with the ISO 9001, ISO 14000, and TL 9000 standards. Traceability is maintained through every step of the process and ultimately back to the incoming fiber. The facility also has a fully functional product qualification lab and cable installation test track.
OFS Uses Both 200 and 250 Micron Fibers
OFS makes several different fiber structures in the Carrollton facility, including loose tube, flat ribbon and rollable ribbon structures. These structures are used in different cable types and applications.
Statistical Process Control Techniques
Each stage in the manufacturing process is highly controlled with appropriate dimensional targets and tolerances.
Colored Ink Is Applied to the Fiber
The industry standard color code is used to provide clear identification of the fibers over their lifetimes. Colored ink is applied at specified thicknesses, cured, and respooled for the next step in the process.
Buffer Tube Manufacturing Process
To make loose tubes, fibers or ribbons are paid off their spools and a buffer tube is extruded around them. The Carrollton plant makes gel-free and gel-filled buffer tubes of different materials, including polypropylene and PBT. Different sized buffer tubes are used for different product types. Buffer tubes used in outside plant applications contain either gel or water-blocking materials impregnated with superabsorbent polymer.
Ribbon Manufacturing Process
A matrix material is applied to the fibers to bind them together so they can be spliced as a group. 12- and 24-fiber flat ribbons are most common. Fiber color code alignment and geometric specifications are very important so ribbons can be spliced and connected in the field.
Rollable ribbons are only partially bonded together, enabling them to be rolled into a cylindrical package. Since circles are more space efficient than rectangles, rollable ribbon cables can hold twice as many fibers as comparably sized flat ribbon cables. Since these fibers are partially bonded, they can easily be spliced either as single fibers or as a ribbon, giving more deployment flexibility to the network operator.
OFS makes two main types of cables – stranded cables and central tube cables.
The first article in this series focused on growth in bandwidth demand and attenuation in optical fibers. Article 2 concentrated on the several types of dispersion that exist in fiber today, closely followed by Article 3 on fiber strength and reliability. Article 4 featured single-mode fiber geometries, and now the latest release from OFS, Article 5, deals with “Cut-off Wavelength” (COW). This latest article will help the user understand what cut-off wavelength is, why it matters, and how it is measured.
CUT-OFF WAVELENGTH MAKES A COMEBACK IN IMPORTANCE
Take an intro to fiber course, and you’ll learn about attenuation, dispersion, fiber geometry, and maybe fiber strength. However, buried in the fine print of fiber specs is a parameter called cutoff wavelength. Although fiber manufacturers and some fiber users are aware of cutoff wavelength, it is a less famous parameter. Users still focus primarily on attenuation, and perhaps dispersion, as the critical propagation properties.
Due to relatively new operating wavelength requirements below 1310 nm, some astute end users are taking a renewed look at cutoff wavelength specifications. Relatively new Passive Optical Network (PON) protocols use wavelengths as low as 1270 nm, which, in optical-spectrum terms, is only a hop and a jump away from the historical cable cutoff wavelength specification λcc of 1260 nm.
Different fibers do different jobs in today’s networks. Some fibers are bend insensitive. Some fibers enable more power to increase signal-to-noise ratios at ultra-high speeds. Cutoff wavelength is important for the performance of these fibers, but these fibers may require differences in measurement methods versus “standard” G.652-type fibers. We’ll talk about measurement methods later in this paper.
WHAT IS CUT-OFF WAVELENGTH?
In an optical fiber a number of different light-“modes” may exist. Modes are different types of light waves which may each carry different portions of light from the input to the output of the fiber.
A multimode fiber, by definition, may carry several modes of light (several hundred), but in a single-mode fiber, only one mode is carried.
The wavelength at which the fiber is at the cusp of changing from single-mode to multi-mode is called the Cut-off Wavelength. Typically, a fiber is considered to be single mode for wavelengths longer than the Cut-off Wavelength (COW) – and multimode at shorter wavelengths. In real life, the transition from single mode to multi-mode transmission does not occur abruptly at an isolated wavelength – but rather relatively smoothly over a range of wavelengths. The single-wavelength number on a specification is a simplification.
The most common way to make a fiber single-mode is to reduce its core-size (diameter), but the contrast between the refractive index of the fiber core and the fiber cladding is also important. These two properties determine the “effective core size” which determines if the fiber is single mode or not at a given wavelength.
WHY IS IT IMPORTANT?
The whole idea of a single-mode fiber is to keep the other modes out of the optical transmission. The reason is the same one that limits multi-mode fibers to short-distance transmission: different modes traveling through a fiber take different paths.
An optical pulse injected into a multi-mode fiber will be transmitted via different modes reaching the end of the fiber after slightly different travel times. Once the modes are recombined at the output of the fiber, the shape of the input pulse will have been distorted (blurred). This distorting effect on the pulse is known as modal dispersion and affects the bandwidth (MHz-km) of a multimode fiber. Single-mode fiber does not have modal dispersion, and consequently has much higher bandwidth over dramatically longer distances.
The concept of cutoff wavelength is receiving renewed attention as next generation PON systems begin operating at wavelengths shorter than 1310 nm. This will be discussed in more depth later in the paper.
“ONLY ONE LIGHT MODE IN MY SINGLE MODE FIBER, PLEASE!”
The light mode intended for transmission in a single mode fiber is called the Fundamental Mode (also called LP01). All other modes are called Higher Order Modes, the most important of which is the Secondary Mode (LP11).
And one might ask: “Where do those higher order modes come from?”
These modes may be generated at splices and connections between fibers. The higher the splice or connection loss, the more power tends to be coupled into Higher Order Modes (HOMs).
HOMs are also called “leaky modes” because they are bound only loosely to the fiber core and tend to leak out of the fiber after traveling a relatively short distance – for wavelengths longer than the Cut-off Wavelength. The longer the distance, the more of those modes will have leaked out.
The Cut-off Wavelength is defined as the wavelength at which the power level of the Higher Order Modes has been reduced by 19.3 dB relative to the level of the Fundamental Mode (strictly speaking this is true only for the Second Order Mode).
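As a quick numeric sketch, and assuming the common interpretation that the Second Order Mode LP11 is twofold degenerate, the 19.3 dB criterion is equivalent to saying that the total output power sits only about 0.1 dB above the power of the Fundamental Mode alone:

```python
import math

# With the two degenerate LP11 modes each attenuated 19.3 dB relative
# to LP01, the total output power exceeds the LP01 power by only ~0.1 dB.
per_mode_ratio = 10 ** (-19.3 / 10)              # each LP11 mode vs. LP01
excess_db = 10 * math.log10(1 + 2 * per_mode_ratio)
print(round(excess_db, 2))                       # ~0.1 dB
```

In other words, at the cut-off wavelength the residual higher order mode power is already small enough to be almost invisible in a simple power measurement.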
MODAL NOISE PROBLEMS
Do we need to worry about modal noise in today’s systems? Without wanting to raise unwarranted concerns, we want to highlight possible situations where modal noise may arise and be problematic.
As briefly mentioned, splices or connectors will cause some of the light in the Fundamental Mode to be coupled into higher order modes. And likewise, light from Higher Order light modes may be coupled down into the fundamental mode by similar splices or connectors.
Situations may arise where two splices or connectors are situated closely to each other. If they are too close, or if the Cut-off Wavelength is too high, parts of the HOMs generated at the first splice/connector may be coupled back down into the Fundamental Mode at the second splice/connector and mixed with light from the initial Fundamental Mode.
This may cause a problem because the travelling time for the light signal may typically be different in the Higher Order Modes than in the Fundamental Mode – and so the two signals may be somewhat out of phase when mixed together. Because of polarization and other effects, such phase difference may change as a result of temperature variation and stress, and this can result in a type of noise called Modal Noise.
To create significant levels of modal noise two joints with large connection losses must exist (one is not enough). Furthermore, the two joints must be so closely spaced that Higher Order Modes have not leaked out of the fiber before reaching the second joint. And finally, the laser used must exhibit some level of mode partitioning.
WHY ARE THERE DIFFERENT TYPES OF CUT-OFF WAVELENGTH?
The Cut-off Wavelength of a fiber depends on the length of the fiber: the longer the fiber, the lower the Cut-off Wavelength tends to be. For that reason, three types of Cut-off Wavelengths are defined to match different applications:
CABLE CUT-OFF WAVELENGTH (λcc): It is very easy to mistake this term for “Cabled COW”, but in principle it does not really matter whether the fiber is cabled or not. The initial intention with this parameter was to simulate a situation with two closely spaced cable splices, for example in a repair situation. A splicing distance of 20 meters was considered a relevant minimum, and to simulate fiber deployed in splice cassettes, a 1-meter fiber length including one 80 mm loop was added at each end of the measurement setup. Effectively, the full length of the fiber measured is 22 meters.
JUMPER CUT-OFF WAVELENGTH (λcj): As the name suggests, it simulates a jumper cable. It is measured on a 2-meter length of fiber with one winding whose diameter may be freely defined; in the US it is typically 152 mm.
FIBER CUT-OFF WAVELENGTH (λcf): As the name suggests, it simulates a fiber bent only in large diameters. In principle it is measured on a 2-meter length of fiber with one 280 mm diameter loop, but as explained later, special care must be taken during measurement, especially for bend-insensitive fibers.
Because Cable COW is measured on a 22 m fiber sample whereas Fiber COW is measured on 2 m fiber, Fiber COW is typically higher than Cable COW.
Normally it is possible to find a good statistical correlation between the Cable COW and the Fiber COW. Since only 2 meters of fiber are needed when measuring Fiber COW, it is easier to measure than Cable COW, where 22 meters are needed. Because of the correlation, Fiber COW measurements are often sufficient to ensure that the Cable COW is within limits.
Also, cabling fibers often has an effect of stripping out higher order modes, due to either macro or micro-bending of the fibers inside the cable sheath. In situations where this effect is significant, Cable COW measured on a cabled fiber may be lower than Cable COW measured on a non-cabled fiber (and of course lower than Fiber COW).
It depends on the fiber type and design, but a realistic example for a step-index fiber is that a Fiber COW of 1350 nm could correspond to a Cable COW of 1260 nm.
WHY IS CABLE CUT-OFF WAVELENGTH SPECIFIED AS 1260 nm IN IEC AND ITU-T RECOMMENDATIONS?
Initially, single-mode fibers were intended for 1310 nm operation, and the manufacturing variability of lasers was rather large, so lasers sold as “1310 nm lasers” could in fact emit light at rather different wavelengths than 1310 nm. So, to create a certain “guard band”, the maximum Cable COW was defined to be 1260 nm.
Today, laser wavelengths may be more tightly controlled and extremely accurate, but those lasers may also be more costly than less accurate lasers. Furthermore, some FTTH/PON transmission formats (especially newer ones such as XGS-PON) use wavelengths close to 1260 nm. So, the 1260 nm COW today has renewed relevance.
BEND-INSENSITIVE FIBERS – AND PROBLEMS MEASURING THEIR ATTENUATION
This may seem like a subject entirely different from Cut-off Wavelength – but measurement problems are quite similar and may not be obvious at all.
To determine the Cut-off Wavelength, two measurements of the power on the output of the fiber are compared:
A. Power in the Fundamental Mode (LP01) only
B. Power in the Fundamental Mode (LP01) and the Higher Order Modes (which in practical terms means the Second Order Mode: LP11)
“B” is just a simple measurement of the output of the fiber, including all modes. But in order to measure “A” we need a filter to get rid of the Higher Order Modes. They are loosely bound modes, and in a standard G.652 fiber they will tend to leak out of the fiber relatively quickly, especially when the fiber is bent in small diameter loops. Two 80 mm loops, perhaps with an added 25 mm loop, in a 2-meter G.652 fiber will do the trick, so this is often used as a HOM filter.
MEASURING ATTENUATION USING THE CUT-BACK METHOD
When measuring the attenuation of a Bend Insensitive fiber, including some advanced Large Area G.654 fibers, different methods may be used. The Cut Back measurement method is often considered the reference method. Light is injected into the Fiber Under Test (FUT), and the aim is to measure the power injected into the fiber (at the beginning of the fiber) and compare that to the measured power at the end of the fiber. Subtracting the two gives the total fiber loss; dividing by the fiber length gives the attenuation in dB/km.
To determine the exact input power level, the fiber is often cut a short distance from the actual input end of the FUT, and the input power level is measured.
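The cut-back arithmetic can be sketched in a few lines; the power levels and fiber length below are hypothetical examples, not values from any actual measurement:

```python
def cutback_attenuation_db_per_km(p_in_dbm, p_out_dbm, length_km):
    """Cut-back attenuation: input power measured at the cut-back point
    minus output power at the far end, divided by the fiber length."""
    total_loss_db = p_in_dbm - p_out_dbm
    return total_loss_db / length_km

# Hypothetical example: -3.0 dBm at the cut-back point, -8.0 dBm at the
# far end of a 25 km fiber -> 5 dB total loss, i.e. 0.2 dB/km
print(cutback_attenuation_db_per_km(-3.0, -8.0, 25.0))
```

Because the powers are in dBm (a logarithmic scale), the subtraction directly yields the loss in dB with no ratio to compute.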
It is always important to avoid Higher Order Modes at the input of the measured fiber, since they will leak out of the FUT over a relatively short distance and be missing at the output of the FUT. HOMs are like ghosts – appearing at the input of the FUT and then disappearing along the fiber length. With these around, one would tend to measure a “too high” power level at the input of the fiber – and the resulting calculated fiber attenuation will then be too high. Such increased attenuation will tend to be more pronounced at shorter wavelengths.
Previously, such attenuation measurements were predominantly made on standard G.652 fiber, and that was easy. You could grab the first 2 meters of the G.652 FUT and make the desired 2 or 3 loops directly on the fiber itself within those first 2 meters; that would give you your HOM filter and get rid of the Higher Order Modes. Your cut-back point for attenuation measurement could then be situated right after the first 2 meters of the FUT, so you would only waste 2-3 meters of fiber for each measurement.
But on bend insensitive fibers it is not quite as easy to get rid of the higher order modes. These fibers tend to better restrict the light from leaking out during fiber bends. Unfortunately, that is true for the higher order light modes as well – those modes which we would want to get rid of when measuring the fiber.
However, the 2-meter G.652 standard fiber with two 80 mm and one 25 mm loop may still be used. It may be spliced to the Fiber Under Test and if a good splice is obtained only an insignificant level of Higher Order Modes may be generated at that splice point – ensuring a good measurement if the splice point is also used as the Cut Back Point.
Another possibility is to use the first 22 meters of the FUT, including the two loops of 80 mm recommended for measuring Cut-off Wavelength. Since that is the recommended test setup for COW measurement, you know that after 22 meters the HOM level is very low at the FUT’s COW and longer wavelengths. The Cut Back Point may be chosen just after the first 22 meters of the FUT, but if very high precision measurement is required, or if the FUT is relatively short, more than 22 meters may be needed.
In OTDR measurements, problems caused by HOMs are often hidden. The reason for this is that an OTDR will typically have a “Dead Zone” during which the light detector of the OTDR is recovering and as such unable to detect the incoming light signal. This dead zone may cover some 500 meters, over which length the HOMs will long since have leaked out of the fiber.
As a quick refresher, Article 1 in this series focused on growth in bandwidth demand. We also looked at attenuation in optical fibers caused by factors external to the fiber (e.g., bending) and at built-in attenuation mechanisms (i.e., scattering and absorption).
In this second article, we will focus on the several types of dispersion that exist in fiber.
DISPERSION – WHAT IS IT?
Much, but not all, of the traffic traveling through fiber networks takes the form of pulses of laser light. Such a pulse is created by turning a laser on and off, creating light pulses where “no light” represents a digital “0” – and “full light” represents a digital “1”. Digital information is consequently a series of “no light” and “full light” transmitted in a code which a receiver at the other end of the fiber understands and can convert to a digital electrical signal.
Illustrated, such a signal is a series of square pulses, as shown in Figure 1.
Whenever such a signal is affected by dispersion, the edges of the square pulses will be rounded, and the pulse will be spread out over time. So dispersion broadens the pulses.
If the dispersion is small, the detector at the other end of the fiber will still be able to detect the signal correctly. Once the dispersion grows too large, the broadened pulses will overlap each other and the detector will start misreading the signal, creating errors that will effectively hamper the transmission quality. A measure of that quality is the BER (Bit Error Rate) which states the number of transmission errors relative to the total number of transmitted bits.
Since a faster transmission rate requires pulses to be of a shorter duration, this also means that a given level of dispersion will be more harmful to faster transmission rate signals. Furthermore, dispersion is almost always dependent on the fiber length – the longer the fiber, the greater the dispersion.
Hence transmission is limited by: A) the dispersion of the fiber, B) the transmission rate, and C) the length of the fiber. Dispersion can be described as a “speed limiter”, and the three main types are:
Modal Dispersion, Chromatic Dispersion and Polarization Mode Dispersion.
Modal Dispersion is the most serious of the dispersion types, and hence the most severe “speed limiter”.
Light “modes” are different types of waves carrying the light through the fiber. In a “Multi Mode” fiber, the core is rather large and may typically allow several hundred different modes to propagate. In a “Single Mode” fiber, the core is so small that it will allow only one mode to propagate.
The problem is that the different modes follow different paths through the fiber – and these paths are of different lengths. Some modes travel close to the center of the core – others bounce against the outer edges of the core, and these modes travel a longer way than the ones close to the center. So the different modes travel different distances – and hence some tend to travel faster than others. Parts of the light being injected into the fiber will travel via one mode – other parts via another mode – and so on. If nothing is done to mitigate this, parts of the input signal will arrive at the output later than other parts – and this will cause the output signal to be “dispersed” relative to the input signal as illustrated in Figure 1.
To minimize the dispersion of the signal at the output of the fiber, the core of a multimode fiber is designed to delay the light modes travelling close to the center of the core (the shortest path) and to speed up the modes travelling the longest paths. In a perfect world this would result in all modes delivering light simultaneously to the output of the fiber. Alas, the world is less than perfect, and a bit of Modal Dispersion cannot be avoided in real life.
This means that, even though multimode fibers are able to use very cost-efficient light sources (like LEDs or VCSELs), they are still limited to transmission distances of typically less than 2 km, and often less than a few hundred meters.
The way to avoid Modal Dispersion is to shrink the size of the fiber core. In a small fiber core there is only room for one light mode to exist, called the Fundamental Mode. In such single-mode fibers, higher order modes may indeed be generated at splices or connectors, but they will leak out of the fiber after traveling a short distance through the fiber.
Having now found a way to avoid the most important speed limiter we can turn our attention to the next in line.
Chromatic Dispersion means that light of different wavelengths travels at different speeds along the fiber. Again, such a difference results in the “blurring” of the signal on the output side of the fiber and effectively acts as a speed limiter.
One might wonder why this should be such a problem, since lasers used to inject light into the fiber have very precisely defined and stable wavelengths. However, quickly turning a laser light on and off actually by itself generates a number of new wavelengths close to the original laser wavelength. Most of these new wavelengths are luckily quite weak and will not cause problems – but unfortunately as the laser light is turned on and off ever more quickly, the range of generated wavelengths broadens (Figure 5).
In such transmission systems the problems caused by Chromatic Dispersion worsen with increasing transmission speed and with longer fiber lengths (scaling linearly with fiber length).
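To put rough numbers on this linear scaling, pulse broadening from chromatic dispersion is commonly estimated as dispersion coefficient × fiber length × source spectral width. The coefficient and linewidth below are illustrative assumptions, not values quoted in this article:

```python
def cd_broadening_ps(d_ps_per_nm_km, length_km, linewidth_nm):
    """Pulse broadening (ps) = D * L * delta_lambda: linear in both the
    fiber length and the spectral width of the source."""
    return d_ps_per_nm_km * length_km * linewidth_nm

# Illustrative: D = 17 ps/(nm*km), a typical value for G.652 fiber near
# 1550 nm, over 80 km of fiber with a 0.1 nm source linewidth
print(cd_broadening_ps(17, 80, 0.1))   # -> 136.0 ps
```

Doubling either the fiber length or the source linewidth doubles the broadening, which is why faster (spectrally broader) signals are hit harder by the same fiber.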
Trying to minimize problems with Chromatic Dispersion the “Dispersion Shifted” (ITU-T G.653) fiber type was initially developed. In classical standard single-mode (ITU-T G.652) fibers the Chromatic Dispersion is zero around 1310 nm. The Dispersion Shifted fibers were targeted for the Chromatic Dispersion to be zero around 1550 nm, because the attenuation of the fiber is lower at 1550 nm and so this combination seemed ideal.
Basically, this worked fine right up until DWDM arrived. In DWDM systems a number of individual channels are transmitted over the same fiber. Each channel is assigned a unique wavelength, but unfortunately the fiber non-linearity called Four Wave Mixing (FWM) tends to cause unwanted noise problems in DWDM systems if the Chromatic Dispersion in the fiber is very low.
So, realizing that some level of Chromatic Dispersion is preferable in order to limit fiber non-linearity problems in DWDM systems, Non-Zero Dispersion Shifted fiber (ITU-T G.655) was developed. This fiber type has a small amount of Chromatic Dispersion around 1550 nm (significantly smaller than standard G.652 fibers), so the “speed limitation” is smaller, but the Chromatic Dispersion is still high enough to reduce non-linearity problems very significantly. Later, the G.656 Non-Zero Dispersion Shifted fibers were developed as a response to the demand for an increasing number of channels in DWDM systems. When the number of channels goes up, the individual channels need to be packed more closely together, and that in turn requires more Chromatic Dispersion in the fiber to reduce the effect of the Four Wave Mixing.
In parallel with the development of new fiber types with different Chromatic Dispersion characteristics, special devices with negative Chromatic Dispersion were developed. Since transmission fibers normally have positive Chromatic Dispersion, a combination of those two can be used to reduce total Chromatic Dispersion for a full fiber link to almost zero.
With the ability to reduce the total chromatic dispersion of a transmission link, the higher Chromatic Dispersion of the G.656 fibers was consequently an acceptable technical compromise – leaving only cost issues still to be considered.
In many of the recent high-capacity transmission systems, the Chromatic Dispersion of the transmission fiber is compensated electronically with high efficiency, and for such systems fibers with high Chromatic Dispersion may actually be advantageous because it helps to limit fiber non-linearities.
Just to make confusion complete, a single-mode fiber will actually be able to carry TWO versions of the fundamental light mode. The reason for this is that light may exist in two different polarizations, the modes of which are perpendicular to one another. The phenomenon is known from some sunglasses which cut away one of those polarization modes. Reflected sunlight from the sea surface or a wet road will predominantly consist of light in one of these polarization modes – whereas light reflected by other objects will consist of a mixture of the two polarization modes. Cutting away the polarization mode of the reflected light will “kill” the reflections, but let the other polarization mode pass through the glasses, leaving other objects visible.
In an optical fiber, the two polarization modes will both exist, but may travel at different speeds through the fiber. Such speed-differences will arise if the fiber core is not perfectly circular and if stress is present in the fiber. Stress can be “frozen” into the fiber during manufacturing if the fiber geometry is not absolutely perfect, for example, if the cladding or coating is not circular, or if the center of the core is different from the center of the cladding or coating.
Even using a state-of-the-art, high-quality manufacturing process, the fiber will not be geometrically 100% perfect; hence there will be a speed difference between the two polarization modes, dispersion will result, and it may limit high speed transmission through the fiber. Even if the fiber were 100% perfect, the slight bending of the fiber in a cable would introduce stress in the fiber, creating PMD. So this is our third speed limiter.
Looking at a fiber from a “PMD-perspective” it may be thought of as having a “fast” and a “slow” lane. An effective way of reducing PMD is by twisting the fiber back and forth during manufacturing so that a high number of shifts between the “fast” and “slow” lanes are effectively seen by the light travelling through the fiber.
Because stress is an important cause of PMD, externally applied stress will also affect fiber PMD. In reality just holding a fiber between two fingers may change PMD. As a result, the PMD of a fiber may be affected both by the cabling of the fiber and by external stresses, for example vibrations from a nearby railroad.
As with other dispersion types, the effect of PMD increases with increased transmission distance (PMD scales with the square root of the distance) and increased transmission speed. For transmission rates of 2.5 Gbps and lower, PMD is normally not a problem. For very high transmission rate systems, PMD compensation is today performed electronically and built into the transmission system.
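The square-root scaling can be sketched numerically; the PMD coefficient used below is an illustrative assumption, not a value from this article:

```python
import math

def total_pmd_ps(pmd_coeff_ps_sqrt_km, length_km):
    """PMD accumulates with the square root of distance, unlike
    chromatic dispersion, which accumulates linearly with length."""
    return pmd_coeff_ps_sqrt_km * math.sqrt(length_km)

# Illustrative coefficient of 0.1 ps/sqrt(km): quadrupling the length
# from 100 km to 400 km only doubles the accumulated PMD.
print(total_pmd_ps(0.1, 100))   # -> 1.0 ps
print(total_pmd_ps(0.1, 400))   # -> 2.0 ps
```

The square-root behavior comes from the random back-and-forth coupling between the fast and slow polarization lanes described above.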
The fiber optic cable world has come a long way over the past 30 years. Products have become more rugged and user friendly, making it easier for people to enter the industry and work handling optical fiber and cable. While this is great for the industry, many people may understand the “how to” but not necessarily the “why” of fiber optics. To understand the “why” behind fiber and cable products, the next step is to become a full-fledged “fiber geek.” Because the industry changes so quickly, this is an ongoing process. The purpose behind this series of articles is to enable the reader to understand some secondary fiber specifications and their importance to the network.
Once fiber is deployed, it’s very expensive to replace. For this reason, the fiber that’s installed should be capable of withstanding multiple generations of hardware while also having plenty of room for additional wavelength growth.
The graphic on the right highlights how wavelength usage has grown over the past three decades. For the first 30 years, applications were focused in the 1310 nm and 1550 nm regions. Given the explosive demand for bandwidth (even more so since COVID-19), it’s reasonable to assume that the next 30 years will require many more wavelengths, with potential applications across the entire optical spectrum.
The demand for bandwidth is expected to continue far into the future, driven in part by requirements for breakthrough applications such as higher resolution video and virtual reality. We expect this demand to continue to drive the need for the optical spectrum provided by fiber. Fiber recommendations such as ITU-T G.652 and ITU-T G.657 are very important for network designers in setting minimum performance levels, but can ultimately be insufficient to meet the requirements for future networks. For this reason, performance beyond the standards can be very important.
This article will focus on critical optical parameters starting with attenuation, or loss in the fiber. Attenuation is a very important optical parameter, and there are many aspects to it. Additional articles in this series will focus on other optical parameters, including chromatic and polarization mode dispersion, splice loss, and an introduction to non-linear effects.
Keeping fiber attenuation low has always been a focal point in fiber development, and today even more so with the widespread use of Coherent Transmission systems. These require large-core, ultra-low attenuation fibers (typically ITU-T G.654 fiber types) for optimal performance of 100G and faster transmission systems.
Attenuation is typically measured in terms of optical dB. It is a logarithmic measure where the loss of a fiber equals 10*log(“power at the input side of the fiber” / “power at the output side of the fiber”). Basically, every 3 dB of loss corresponds to the optical power being cut in half. It is fair to assume that the attenuation of a fiber is almost constant over its length. So if a fiber’s loss is 0.25 dB/km, a total loss of 3 dB will be reached after the optical signal has passed through 12 km of fiber.
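The dB arithmetic above can be verified in a few lines (a sketch using the same numbers quoted in the text):

```python
import math

def loss_db(p_in_mw, p_out_mw):
    """Optical loss in dB from input and output powers (same units)."""
    return 10 * math.log10(p_in_mw / p_out_mw)

print(round(loss_db(1.0, 0.5), 2))   # halving the power -> ~3.01 dB
print(0.25 * 12)                     # 0.25 dB/km over 12 km -> 3.0 dB total
```

So “3 dB” and “half the power” are, for practical purposes, the same statement, which is why per-kilometer losses can simply be summed along a link.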
Looking at the different loss mechanisms in fibers, it may be helpful to distinguish between:
A): Attenuation caused by factors external to the fiber (as for example bending), and
B): Built in attenuation mechanisms.
Looking at B) first, there are two main loss mechanisms in optical fibers: Scattering and Absorption.
The first of these, scattering, is also called “Rayleigh scattering”. Even the best and purest synthetic quartz glass (of which OFS fibers are made) is not 100% homogeneous. It consequently contains small fluctuations of glass density, which are frozen into the glass during manufacturing and may scatter the light when hit by a light ray (this is the same mechanism responsible for the blue color of the sky, when sunlight scatters off molecules in the atmosphere). Much of the light will continue traveling in the original direction, but a small part will be scattered in all directions. Some light will propagate sideways out of the fiber, where, for transmission purposes, it will be lost. Some of it will actually be scattered backwards towards the sender. This is the phenomenon used by OTDR measurement devices to measure fiber attenuation, which is why the device only needs to be connected to one end of the fiber.
In optical fibers, scattering is dominant at shorter wavelengths, whereas the opposite is true for the other built-in attenuation mechanism: absorption (Figure 4).
Basically absorption happens when a light ray hits something – and gets converted into heat. So for practical purposes the light simply “disappears”.
Even extremely small impurities – down to a fraction of a micron – may absorb light, causing unwanted attenuation. It may be small particles – but it may also be impurities in the raw materials used for fiber manufacturing. This is why such extremely close attention is paid to the quality and purity of the raw materials used.
Due to the inherent material structure of glass, absorption increases rather drastically at wavelengths longer than approximately 1550 nm (Figure 4).
Of particular interest over the years has been the hydroxyl (OH-) ion, which absorbs light around 1383 nm, giving rise to the so-called “water peak” in the attenuation curve of the fiber (Figure 5 – black curve). Being a by-product of the actual manufacturing process, hydroxyl ions are difficult to fully avoid in the fiber, but it is possible to suppress the attenuation increase at wavelengths close to 1383 nm. This is done by adding deuterium gas, which interacts with the free bond of the hydroxyl ion, thereby acting as a barrier and securing excellent long-term water peak attenuation performance.
Conventional single-mode fibers meeting the G.652 recommendation may have a high Water Peak loss. This could limit the use of the fiber in some applications and may also make the fiber less useful in transmission systems using modern Raman amplification, where amplifier laser-pumps would typically operate 110 nm below the transmission signal wavelength.
OFS has fibers classified as Zero Water Peak (ZWP), with even better specified water peak performance than the so-called Low Water Peak (LWP) fibers. The long-term stability of ZWP fibers is excellent, whereas for some types of ITU-T G.652 fibers the water peak attenuation might actually increase over their lifetime, slowly reducing the quality of the network.
Because of the optimized Water Peak performance, ZWP fibers serve the widest ranges of wavelengths and support the highest number of applications, as illustrated in Figure 1.
Figure 5 shows three different grades of ITU-T G.652 fiber and how they may perform in the water peak region around 1383 nm.
For the most part, scattering and absorption properties are locked into the fiber during manufacturing.
Bending, however, is another story…
Bending is a very important mechanism. As briefly mentioned, it is caused by factors external to the fiber and so both the cabling process and installation in the field can affect attenuation caused by bending.
To put it simply, what makes an optical fiber work is the use of different types of glass for the fiber core and for the glass surrounding the core (also known as the cladding). In this way, a sort of tubular mirror surrounding the core is created. This is what keeps the light inside the fiber, using the concept of “total internal reflection” to guide the light. However, this mirror is not a perfect one. It only works if the light rays in the fiber run almost parallel to the core, and so if the fiber is bent (too) tightly (i.e. past the “critical angle”, where reflection turns to refraction), light will leak out of the fiber, causing loss – or attenuation.
This is called macro bending: bending with a diameter larger than a few millimeters, which is what one would intuitively understand as “bending” the fiber.
Another type of bending is called micro bending. It concerns bending diameters smaller than 1 mm and could happen – as an example – if a fiber is squeezed between two sheets of sandpaper. Much more relevantly it may also happen if the fiber is being squeezed inside the cable construction (for example by the tubes containing the fibers) creating stress on the fiber. As loads/stresses increase, so does the loss.
Both types of bending loss cause attenuation increase, but it is possible to tell the two types of bending apart by considering the added loss at different wavelengths as illustrated in Figure 8.
Macrobending losses tend to be small at short wavelengths, but may increase rather dramatically at longer wavelengths.
Microbending losses are also typically present at short wavelengths, but the loss increase tends to be smaller than for macrobending at the longer wavelengths.
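The total internal reflection condition mentioned above can be illustrated with Snell’s law. The refractive indices below are assumed, ballpark values for a single-mode fiber, not a specification; they show why guided rays must run nearly parallel to the fiber axis:

```python
import math

# Illustrative refractive indices (assumed values; actual indices
# depend on the specific fiber design).
n_core = 1.468
n_clad = 1.463

# Total internal reflection only holds above the critical angle,
# measured from the normal to the core/cladding boundary.
theta_c = math.degrees(math.asin(n_clad / n_core))

# Equivalently: how far a ray may stray from the fiber axis.
max_axis_angle = 90 - theta_c

print(f"critical angle: {theta_c:.1f} degrees from the normal")
print(f"rays must stay within ~{max_axis_angle:.1f} degrees of the axis")
```

Because the core and cladding indices differ by well under 1%, the critical angle is close to 90°, so even a modest bend can tip a ray past it and let light refract out of the core.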
All of the trends in fiber deployment point to the increased importance of fiber bending performance.
Service providers constantly want to put more fibers into a smaller space which means that while buffer tube diameters keep shrinking, the fiber counts used in these buffer tubes keep increasing. This leads to a situation where there is less room for fibers to move before touching a buffer tube wall, thereby increasing the risk of microbends.
In addition, service providers historically installed cables in either the outside plant, the inside of central offices, or remote cabinets, and in all of these places great care was taken to avoid small-diameter bending. However, today’s fiber is going places it hasn’t gone before. It’s going inside our homes and businesses and also up poles and onto rooftops to feed cellular and Wi-Fi sites.
Tolerance to bending will be even more important in the future.
Micro and macro bends affect the network in ways that are not always obvious.
Bend-related losses are sometimes experienced in cold temperature environments. For this reason, fibers and cables should always be tested under low temperature conditions. As a network designer, it’s always a good idea to account for at least some optical margin for small potential attenuation increases in cold temperatures.
Very high-density designs in particular may benefit from using bend-insensitive fibers, due to the unavoidable bends and lack of free space for fiber movement in the cable design itself.
While these issues are already important today, they will become even more important tomorrow. The reason is that next generation optical transmission protocols may typically use longer wavelengths than the existing protocols.
As highlighted earlier, longer wavelengths will often result in higher bending loss. Theoretically, a GPON network operating flawlessly today at 1490 nm – containing inadvertent bends – could have its reach reduced by almost half when it is upgraded to NG-PON2, operating at 1603 nm.
So a FTTH network installed today and working fine may not be suited for operation with future generation transmission equipment.
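The effect of wavelength on reach can be sketched as a toy link budget. Every number below is an illustrative assumption chosen to mirror the scenario in the text, not a GPON or NG-PON2 specification:

```python
# Hypothetical link-budget sketch (all values are assumptions).
budget_db = 6.0             # optical margin available for fiber plus bends
fiber_loss_db_per_km = 0.25

bend_loss_1490 = 0.5        # assumed total loss of inadvertent bends at 1490 nm
bend_loss_1603 = 3.5        # the same bends lose far more at the longer wavelength

# Whatever margin the bends do not consume is available for fiber length.
reach_1490 = (budget_db - bend_loss_1490) / fiber_loss_db_per_km
reach_1603 = (budget_db - bend_loss_1603) / fiber_loss_db_per_km

print(f"reach at 1490 nm: {reach_1490:.0f} km")  # 22 km
print(f"reach at 1603 nm: {reach_1603:.0f} km")  # 10 km, less than half
```

The bends themselves do not move; only the operating wavelength changes, yet the usable reach collapses, which is the upgrade risk the text describes.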
HELP IS ON THE WAY
To enable more compact cable constructions, allow for easier installation, and perhaps even permit the use of less experienced craftspeople for cable installation, quite a bit of attention has recently been focused on developing fibers with reduced sensitivity to bending, i.e. those defined by ITU-T Recommendation G.657.
G.657 specifies 4 different classes of fibers: “A1”, “A2”, “B2” and “B3”.
The “A” fibers are also required to comply with the specifications of the ITU-T G.652.D recommendation, whereas the “B” fibers may deviate from G.652.D on some parameters. The number (1, 2 or 3) signifies the fiber’s tolerance to bending, with B3 fibers being the most bending tolerant. Many “B3” fibers today comply with G.652.D and could rightfully be labeled “A3”, but such a class is not specified by ITU-T.
ITU-T G.657.A1 fibers are the closest to standard G.652.D fibers and may soon be the primary choice for the vast majority of fiber networks. OFS has combined G.657.A1 and G.652.D performance with a 9.2-micron mode field diameter.
G.657.A1 fibers with a 9.2-micron mode field diameter perform the same way as standard G.652 fibers in terms of splicing and can consequently be said to splice “seamlessly” to the huge base of already installed fibers. Because the splice performance is identical to that of standard G.652 fibers, installation crews and quality inspectors will notice no change in performance and hence have no cause for concern, even though the advantage of better tolerance to bending is still there.
These fibers are ideal for most of today’s typical short-distance (<1000 km) and low data rate (<400 Gbps) applications, including standard outside plant (OSP) loose tube, ribbon, rollable ribbon, microduct cables, and drop cables.
ITU-T G.657.A2 fibers can be bent more tightly with lower loss. They are most commonly used in central office and cabinet environments, such as Fiber Distribution Hubs (FDH). These fibers are also commonly used in building backbone networks and as tails for various pre-terminated panels and other devices. In these environments, the fibers may need to be bent more tightly than in typical OSP cable applications.
The application spaces just mentioned for A1 and A2 fibers typically involve one fiber carrying traffic for thousands of customers, meaning that a fiber break would affect service to thousands of users. Reliability is consequently paramount. In such situations, A2 fibers (and A1 fibers as well) offer the advantage of providing an “early warning” signal of increased attenuation whenever they are bent tightly enough to potentially cause reliability concerns. This is especially important for central office applications, where one fiber could provide the feed for millions of customers.
ITU-T G.657.B3 fibers are the third main category of bend insensitive fibers. These fibers are designed and recommended for use in the drop portion of a Fiber-to-the-Home (FTTH) network serving a few customers per fiber. Homes and buildings with lots of tight spaces are very demanding places to deploy fiber. For optimized performance in such applications, OFS has fiber designed and specified for use with bending radii as low as 2.5 mm, significantly less than the minimum bending radius of 5 mm specified in the G.657.B3 recommendation.
OFS has fibers used in cables with a diameter of only 0.6 mm, enabling almost invisible in-house cable routing with a minimum of bend management. This avoids bulky and unsightly installations in private homes. For more demanding deployments, ruggedized cable designs with a diameter of 4.8 or 3 mm can even be routed around corners and stapled using fast and easy installation practices, with negligible signal loss.
G.657 fibers which are not compliant with G.652.D are often assumed to have very small cores, giving rise to significant additional splice losses when spliced to standard G.652.D fibers. However, that is not necessarily so. It is possible to get G.657.B3 fibers specified with an ultra-low bending radius of 2.5 mm and – while these fibers are not “seamless” fibers – they do in fact comply with the G.652.D recommendation in terms of core size. The only thing preventing such fibers from complying with G.652.D is their chromatic dispersion, and since they are primarily intended for in-building applications, the length will typically be much less than the 10–40 km fiber length at which the higher chromatic dispersion may typically start presenting problems.
Regarding bending loss, however, the performance of such a fiber is significantly better. The loss for a single turn at a 2.5 mm radius at 1550 nm is at most 0.2 dB for such a fiber, whereas the equivalent loss for a standard G.652.D fiber exceeds 30 dB.
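To put those dB figures in perspective, the fraction of optical power surviving each loss can be computed directly from the dB definition:

```python
def remaining_power_fraction(loss_db: float) -> float:
    """Fraction of optical power remaining after a given loss in dB."""
    return 10 ** (-loss_db / 10)

# One 2.5 mm radius turn at 1550 nm, using the figures quoted above:
print(f"{remaining_power_fraction(0.2):.1%}")   # bend-optimized fiber: 95.5%
print(f"{remaining_power_fraction(30.0):.3%}")  # standard G.652.D: 0.100%
```

In other words, the bend-optimized fiber passes over 95% of the light through the same turn that leaves a standard fiber with only a thousandth of its power.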
Searching for an innovative fiber optic termination tool or kit? Then look no further: the FITEL EZ-Terminator tool is here.
The newest member of the FITEL Connectivity Solutions portfolio, the EZ-Terminator connector termination tool uses a simple, one-step operation and user-friendly interface to achieve the highest-quality terminations, quickly and under the most demanding conditions.
This handheld connector termination tool combines portability with a ruggedized body to provide the maximum accessibility and powerful performance needed for use in Multiple Dwelling Unit (MDU) and Single-Family Unit (SFU) installations. In addition, the EZ-Terminator tool’s industry-first, patented, removable V-groove allows easy cleaning and optical maintenance.
User-Friendly Design – The wide operation chamber offers easy optical fiber loading and connector assembly;
Simple Operation – The design allows one-touch operation and pre-installed programs for error-free SOC fiber termination projects;
Excellent Visibility – Three LED lights illuminate the entire operating chamber with more than 300 lux. This intense, bright light is critical to performing connector terminations in low-light environments.
Industry-First, Patented, Removable V-Groove – The industry’s only removable V-groove makes cleaning and optimal maintenance easy to achieve in only minutes and with no tools. This capability reduces downtime and supports optical performance.
Combined with a variety of EZ!Fuse™ SOC Components, the EZ-Terminator connector termination tool helps to save both time and money by delivering optical loss performance and yields that substantially surpass those of currently-available mechanical connectors. And, on top of this, the large battery capacity can achieve 100 termination/heating cycles on a single charge, providing installers with portability without sacrificing performance.
The EZ-Terminator connector termination tool’s simple, error-free operation and powerful, consistent performance make it a must-have for any fiber termination project where the highest-quality, repeatable results are critical.
Using optical fiber networks, people can access and share information at an amazing level. They can communicate, work and learn from virtually anywhere there’s an Internet connection. For people in rural communities that lack wireless or broadband services, their ability to obtain information is clearly unequal. Even getting a signal for a cellphone or laptop can mean driving miles to a more populated area. Life is much easier with an available high-speed optical fiber network.
Leveling the Playing Field
Implementing optical fiber helps to “level the playing field” by providing more equal access to information and opportunities for rural residents. In reality, optical fiber and wireless services can transform rural communities.
When optical fiber arrives, one obvious plus is being able to access a cell signal from home. That wireless service requires optical fiber, which acts as the nervous system of a network. Fiber to the Tower and Fiber to the Building lay the actual groundwork for wireless communications, including LTE and 4G, and the coming 5G. The benefits of this connectivity can be seen in three distinct areas as follows.
Digital revolution through high-speed optical fiber Internet helps medical facilities provide better treatment for patients in rural areas in a number of ways, including:
Physicians can search files, consult with specialists and use remote diagnostics and alternative healthcare delivery methods;
Healthcare professionals may use connected devices to directly monitor and care for patients;
Patients practice “self-care” by accessing health-related information on the Internet.
Teachers need optical fiber connectivity for video lectures and e-learning that can be widely shared. Students also need access to home Internet to complete homework and expand their learning. Colleges and universities require high-speed optical fiber Internet access to stay competitive and ensure their degree programs stay relevant.
Growth in Rural Communities
With 25% of rural residents lacking Internet access, fiber optic infrastructure build-outs are still needed. More people move into rural areas when they can maintain their standard of living. When optical fiber connectivity is optimal, existing or new businesses can reach and attract highly-qualified employees no matter where they live.
In rural areas where high-speed Internet is available, even small businesses and farms can benefit. The Internet of Things (IoT), another product of this digital revolution, makes Smart Farming possible. By applying sensing technologies through Smart Farming, farmers can practice more precise and scientific agriculture that results in increasingly bountiful, high-quality harvests.
Combining plenum-rated materials with OFS rollable ribbons creates a very compact, yet robust and fiber-dense cable. By featuring rollable ribbons, the latest OFS optical fiber technology, the R-Pack RR Backbone Cable offers twice the fiber density when compared to a traditional flat ribbon premises cable. The result is a reduced diameter, fiber-dense cable that helps customers to substantially improve fiber routing and save on space in congested pathways.
What are Rollable Ribbons?
To form rollable ribbons, 250 micron fibers are partially bonded to each other at intermittent points. Rollable ribbon cables offer the advantages of both loose fibers and traditional flat fiber ribbons in one fiber optic cable. These ribbons can be rolled and routed similarly to individual bare fibers and can also be spliced like traditional fiber ribbons. Rollable ribbons promote efficient and cost-effective mass fusion splicing while also offering easy breakout of individual fibers. These capabilities can help simplify cable installation, save on splicing time and costs and get a new data center or building deployment up and running quickly.
While the R-Pack RR Backbone Cable meets stringent Telcordia GR-409 standards for horizontal backbone applications, its plenum construction also meets NFPA 262 requirements for use in a number of demanding building applications, such as routing through ladder racking and raceways. This fiber optic cable can also be used in numerous other application spaces or even to construct assemblies.
As everyone uses more bandwidth than ever before, today’s networks require more optical fiber in less space. To help address this need, OFS introduced Fortex™ 2DT Fiber Optic Cable, the newest addition to the completely gel-free Fortex DT Cable product line.
Fiber Optic Cable: Getting Smaller and More Dense
Fortex 2DT Cable is the industry’s first fully Telcordia GR-20-rated, totally gel-free, loose tube fiber optic cable to feature 200 micron (µm) optical fiber. This fiber literally doubles the fiber count in the cable buffer tubes, significantly increasing fiber density. And, by using AllWave®+ 200 Micron ZWP Single-Mode Fiber, this fiber optic cable also offers more efficient use of network pathways.
Just as importantly, the Fortex 2DT Cable design reduces cable outer diameters by up to 18% and areas by up to 32%. This smaller outer cable diameter increases the efficient use of duct and subducts. Plus, cables with reduced outer diameters allow longer continuous cable reel lengths, which can result in fewer splices needed. In a deployment over long distances, less splicing can help create substantial cost savings.
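As a quick cross-check of the quoted figures, cross-sectional area scales with the square of the diameter, so an 18% diameter reduction corresponds to roughly the stated area reduction:

```python
# Area of a circular cross-section scales with diameter squared,
# so shrinking the diameter by 18% shrinks the area by ~33%.
diameter_reduction = 0.18
area_reduction = 1 - (1 - diameter_reduction) ** 2

print(f"area reduction: {area_reduction:.0%}")  # 33%
```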
Lighter is Better
The Fortex 2DT Cable is also lighter in weight. This lower weight can help to reduce cable pulling tensions which can increase cable pulling lengths. These increased pulling lengths can, in turn, help to save on installation time and costs. For aerial deployments, a lighter-weight cable can also decrease the loads on poles.
A Fiber Optic Cable Design for Your Application
The Fortex 2DT Cable product line features single jacket, light armor and armored cable options. These cables are available with up to 288 fibers in Telcordia GR-20 Issue 4 compliant cable designs. While the single jacket cable is an excellent choice for duct, lashed aerial and general outside plant (OSP) installations, the light armor and armored cables feature a layer of rugged electrolytically chrome-coated steel (ECCS) armor. The armored cable also includes an inner polyethylene (PE) jacket. With these added features, the light armor and armored cables offer extra durable crush resistance for more demanding OSP applications, including direct buried installation in challenging environments.
A huge increase in digital devices, cloud computing and web services has helped fuel the tremendous demand for increased bandwidth while also pushing datacom rates to 100G and beyond. With these faster speeds and greater use, system designers might assume that single-mode optical fiber holds a growing advantage over multimode optical fiber for premises applications. However, it’s critical to remember that increased Ethernet speeds don’t necessarily mean that single-mode fiber is the best choice.
While it’s true that single-mode fiber holds bandwidth and reach advantages, especially for longer distances, multimode fiber easily supports most distances needed by data center and enterprise networks, and at a significant cost savings over single-mode fiber.
What’s the Difference?
These two optical fiber types were primarily named for the different ways that they transmit light. Single-Mode optical fibers have a small core size (less than 10 microns) and allow only one mode or ray of light to be transmitted. These fibers were mainly designed for networks that involve medium to long distances, such as metro, access and long-haul networks.
On the other hand, multimode fibers have larger cores that work to guide many modes at the same time. These larger cores make it much easier to capture light from a transceiver, helping to control source costs.
Today, network designers and end users can choose from OM3, OM4 or OM5 grades of 50 micron multimode fibers. In the 1980s, as data rates increased, 62.5 micron fiber was introduced because it allowed for longer reach to support campus applications. However, with the advent of gigabit speeds, users moved back to 50 micron fiber with its inherently higher bandwidth. Now 50 micron laser-optimized multimode OM3, OM4 and OM5 fibers offer major bandwidth and reach advantages for short-reach applications along with low system costs.
Industry standards groups such as IEEE (Ethernet), TIA, ISO/IEC and others continue to recognize multimode optical fiber as the short-reach solution for next-generation speeds. In fact, TIA issued a new standard for the next generation of multimode fiber called wide band (OM5) multimode fiber. This new version of 50 micron fiber can transmit multiple wavelengths using Short Wavelength Division Multiplexing (SWDM) technology, while maintaining OM4 backward compatibility. This capability lets end users gain greater bandwidth and higher speeds from a single fiber by simply adding wavelengths. The OFS version of this fiber is LaserWave® WideBand (OM5) Optical Fiber.
Generally, 50 micron optical fiber continues to be the most cost-effective choice for enterprise and data center use up to the 500-600 meter range. Beyond that distance, single-mode optical fiber is necessary.
The OFS LaserWave FLEX Multimode Optical Fiber family offers full performance range and has better optical and geometric specifications than standards require. However, if the network’s transmission distance requires the use of single-mode optical fiber, consider bend-insensitive, zero water peak (ZWP) full-spectrum fibers such as the OFS family of AllWave® Optical Fibers.
Many land and oceanic oil operations use temperature sensing to help improve safety and functionality in harsh environments. Optical fibers used in these conditions are routinely exposed to high temperatures and pressures, along with ionizing radiation and aggressive chemicals in the surroundings.
While researchers have thoroughly studied the mechanical strength of optical fibers under ambient conditions, they have rarely examined fibers after exposure to elevated temperatures and/or liquids. In fact, to the best of our knowledge, there is no systematic data documenting the mechanical strength of optical fibers placed under high temperatures and pressures such as those experienced in temperature sensing.
That’s why when Andrei Stolov of OFS decided to perform an experimental study, he was operating in somewhat “unknown territory.” Before beginning the experiment, Stolov realized that a number of factors would influence whether optical fibers could survive the harsh conditions found in oil operations. These aspects include the type of fiber coating, environment, temperature, pressure and usage time.
When optical fibers are used at elevated temperatures or in aggressive environments, the most frequent indications of failure are added attenuation or loss of mechanical strength. In Stolov’s study, he used strength degradation as his criterion for failure.
In his experiment, Stolov submerged a range of optical fibers with various coatings into four high-temperature/high-pressure fluids, namely (1) distilled water; (2) sea water; (3) isopropyl alcohol (IPA); and (4) paraffin oil. Undersea and downhole applications primarily drove his choice of fluids. In these situations, fibers can be exposed to these or similar environments.
To learn more about the study and its results, please see the full study.