Oil Reserves

Global Oil Reserves Per Region - since 1980
Oil Reserves Middle East region - since 1980
Oil Reserves Asia Pacific region - since 1980
Oil Reserves South America region - since 1980
Oil Reserves Africa region - since 1980
Oil Reserves North America region - since 1980
Oil Reserves Europe region (including Russia) - since 1980
Proven Oil Reserves - in major countries by reserves size at end 2007
Proven Reserves - major countries (top 20) by reserves size
Oil Production Increase - per region in 2006 compared to 2005 - in barrels per day - note the remarkable stability as a percentage of the total world oil production of ca. 85 million barrels per day

Oil Depletion Rates - ranked by size of reserves (highest reserves to the left, lowest to the right) - in major countries - as of end 2007. The UK has the highest depletion rate at 17%, with Saudi Arabia, Iran, Iraq and Kuwait the lowest at 1-2% per annum (implying ca. 50 to 100 years of production at current rates).
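
The arithmetic behind these depletion figures is simple: the annual depletion rate is yearly production divided by remaining proven reserves, and its inverse is the familiar reserves-to-production (R/P) ratio. A quick sketch, using illustrative round figures only (not taken from the charts):

```python
def depletion_rate(annual_production_bbl, reserves_bbl):
    """Fraction of remaining proven reserves produced each year."""
    return annual_production_bbl / reserves_bbl

def years_remaining(annual_production_bbl, reserves_bbl):
    """Reserves-to-production (R/P) ratio: years of output at current rates."""
    return reserves_bbl / annual_production_bbl

# Illustrative (assumed) figures: a producer at 10 million bbl/day
# sitting on 260 billion bbl of proven reserves.
annual = 10e6 * 365
print(round(100 * depletion_rate(annual, 260e9), 1), "% per annum")
print(round(years_remaining(annual, 260e9)), "years at current rates")
```

At a 1-2% depletion rate the R/P ratio comes out at 50-100 years, matching the range quoted for the largest Middle Eastern holders.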
Oil Revenue Per Person per Country in 2008 - in major countries by reserves size. Note that at a ca. $105/bbl average in 2008, Kuwait delivered a staggering $33,500 per person. Brunei follows with $25,000, then UAE and Norway at around $20,000 per person. These are some of the richest countries in the world if oil prices rise above ca. $50/bbl.
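
The per-person figure is just gross production revenue spread across the population. A rough sketch with assumed round numbers for Kuwait in 2008 (the production and population inputs are illustrative approximations, not taken from the chart):

```python
def oil_revenue_per_person(bbl_per_day, price_usd, population):
    """Gross annual oil revenue divided across the population."""
    return bbl_per_day * price_usd * 365 / population

# Rough, assumed figures for Kuwait in 2008: ~2.7 million bbl/day produced,
# ~$105/bbl average price, population ~3.1 million.
print(round(oil_revenue_per_person(2.7e6, 105, 3.1e6)))  # roughly $33,000-34,000, in line with the figure quoted above
```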
Oil Revenue per Country in 2008 - in major oil producing countries. Note Saudi Arabia and Russia head the list with production at ca. 9.5 million barrels a day. Both countries export most of their oil. USA is third but is a large net importer of ca. 8 million barrels a day.
Oil Revenue per Smaller Oil Producing Country in 2008 - in major oil producing countries (note: gross produced revenue, not net or net exported revenue or profit). Based on the 2008 oil price and 2007 oil production.
Oil Production per Country in 2007 - in major oil producing countries. Note Russia and Saudi Arabia are the biggest oil producers, followed by the USA. However, the USA imports ca. 8 million barrels a day, whilst Saudi Arabia exports the bulk of its oil.
Gross Oil Sales Per Region - 1972 to present, and forecast to 2015 based on the below oil price forecast. This chart shows the total region's oil production multiplied by the average oil price in that year (benchmarked to the Dubai price). Note - the price forecast is uncertain, but with prices crashing from $147/bbl to $33/bbl between July 2008 and Dec 2008, we expect prices to recover as the recession ends, with tightening supply and rising demand driving prices up from end 2009.
Cost of Oil Per Region - 1972 to present, and forecast to 2015 based on the below oil price forecast. This chart shows the total region's oil cost - the oil consumption multiplied by the average oil price in that year (benchmarked to the Dubai price). Note - the price forecast is uncertain, but with prices crashing from $147/bbl to $33/bbl between July 2008 and Dec 2008, we expect prices to recover as the recession ends, with tightening supply and rising demand driving prices up from end 2009.
Net Oil Exports and Imports Per Region - 1972 to present, and forecast to 2015. Note the massive Far Eastern draw on oil in years to come. The Europe region is only slightly negative because Russia is included in this region. The Middle East is the clear exporter, with Africa another net exporter (85% from Libya, Algeria, Nigeria and Angola).
Global Oil Cost Deficits and Surplus Per Region - 1972 to present, and forecast to 2015. Using historical oil prices before end 2008, and oil price forecast below up until 2015. Note recession impacts 2009 and 2010 - as Middle East cuts back oil and the oil price crashes, before recovering end of decade.
Global Oil Cost Deficits and Surplus Per Region - 2000 to present, and forecast to 2015. Using historical oil prices before end 2008, and oil price forecast below up until 2015. Note recession impacts 2009 and 2010 - as Middle East cuts back oil and the oil price crashes, before recovering end of decade.

Oil Production in Smaller Producing Countries in 2007

Oil Production - Major Percentage Increases and Decreases - 2006 compared to 2005. Note Angola had the world's biggest percentage increase in oil production. This is likely to continue in 2008 and possibly 2009. Chad is in decline, as are the UK and Turkmenistan - this is also likely to continue in 2008 and 2009. However, OPEC (mainly Middle Eastern) countries will cut back production end 2008 and early 2009.

Vicious Viruses or Artificial Intelligence


Our computers were stricken by one of the most vicious and ingenious strains of virus/malware/Trojan I have ever seen. It was a multi-stage, multi-component system composed of a timer-based installer and a downloader that contains another downloader, which in turn installs viruses that install other viruses. The system also operates in worm mode, installing itself on any other computer on the network by exploiting the latest zero-day security holes. In addition, it uninstalls the antivirus on the computer, disables the firewall and automatic updates, and prevents the installation of new antivirus software. Because of the way it works, it took the antivirus software close to four days to discover all the hidden features, slipping between safe mode, offline updates and other methods of removing the virus. I am still not sure it is behind me.

One of the most logical questions is: what do these operators and programmers stand to gain from designing a system on such a huge scale? The surprising answer is hundreds of millions of dollars, perhaps even billions! Over the last few years I have been working alongside some of the major Internet marketing experts, which gave me insight into the problem. One of the key components of any successful Internet marketing strategy is affiliate programs. These programs allow the vendor to expand its services to customers and add value to the site.

As an example, suppose we have a website that helps people find movers in their area. Suppose you visit that site and click on one of the boxes, which are supplied by a third party. Most affiliate programs assign a vendor ID to the link and set a cookie to help track that the user was indeed referred by the vendor. This allows the user to leave the site; if on a subsequent visit the user decides to buy a product or service, the vendor is granted a percentage of the sale. It used to be common practice to install a frame called an "invisible popup" in the page to mark the client and pay money to the perpetrators, but newer browsers block these attempts.
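
A minimal sketch of how such cookie-based attribution works, with hypothetical names and a hypothetical 90-day window throughout (no real affiliate network operates exactly this way):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of cookie-based affiliate attribution.
COOKIE_WINDOW = timedelta(days=90)

def record_referral(cookies, vendor_id, now):
    """On the referred visit, tag the browser with the vendor's ID."""
    cookies["affiliate"] = {"vendor": vendor_id, "set_at": now}

def credit_sale(cookies, sale_usd, rate, now):
    """Return (vendor, commission) if a still-valid affiliate cookie exists."""
    tag = cookies.get("affiliate")
    if tag and now - tag["set_at"] <= COOKIE_WINDOW:
        return tag["vendor"], sale_usd * rate
    return None, 0.0

cookies = {}
t0 = datetime(2009, 1, 1)
record_referral(cookies, "vendor-123", t0)
# A purchase 30 days later still credits the tagging vendor 4% of the sale.
print(credit_sale(cookies, 250.0, 0.04, t0 + timedelta(days=30)))  # ('vendor-123', 10.0)
```

The malware described below abuses exactly this mechanism: it forges the referral step so that the "vendor" credited is the operator of the malware.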

I think you now understand why this malware is such a lucrative business. By opening up as many sites as possible before being removed, the malware tags those sites, so if you purchase from any of them within three months you are paying the pirates a large percentage of the sale. For example, if a Trojan popped up Amazon, the perpetrators would pocket at least 4% of every subsequent purchase.

A couple of years ago I turned down a very highly paid job offer to develop such malware for a company that makes well over $100M in profits every year from this scam. It is only one of hundreds of companies worldwide that exist to exploit these security problems.

Unlike battling pirates across a million square miles of ocean, governments can battle the virus operators using racketeering laws to follow the money trail and shut these operations down. They can investigate and expose the operators by infecting computers with the viruses and seeing which affiliate programs are tagged, then hold the payments made by those affiliate programs. I think these measures would be far more effective than trying to nail a specific individual. By drying up the pond, you get rid of the mosquitoes.

It may appear that I am in the business of making viruses - that is not the case. I went to a job interview, and when I did a background check on what the company does, that is what I discovered. I decided that morally I can't be associated with such a company.

GM, DaimlerChrysler, BMW Premiere Unprecedented Hybrid Technology



The state-of-the-art full hybrid system, whose components are being co-developed by General Motors Corp., DaimlerChrysler and the BMW Group for production beginning next year, represents a major automotive industry milestone due to the unprecedented fully integrated combination of electric motors with a fixed-gear transmission.

As a result of its low- and high-speed electric continuously variable transmission (ECVT) modes, the system is commonly referred to as the 2-mode hybrid. However, the sophisticated fuel-saving system also incorporates four fixed gear ratios for high efficiency and power-handling capabilities in a broad variety of vehicle applications. During the two ECVT modes and four fixed gear operations, the hybrid system can use the electric motors for boosting and regenerative braking.

In summary, the four fixed gears overlay two ECVT modes for a total of six operating functions:

  • Input-split ECVT mode, or continuously variable Mode 1, operates from vehicle launch through the second fixed gear ratio.
  • Compound-split ECVT mode, or continuously variable Mode 2, operates after the second fixed gear ratio.
  • First fixed-gear ratio with both electric motors available to boost the internal combustion engine or capture and store energy from regenerative braking, deceleration and coasting.
  • Second fixed-gear ratio with one electric motor available for boost/braking,
  • Third fixed-gear ratio with two electric motors available for boost/braking.
  • Fourth fixed-gear ratio with one electric motor available for boost/braking.
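
A hypothetical sketch of the six operating functions above as a simple lookup - which ECVT mode applies in each fixed gear, and how many electric motors are available for boost/braking (the real control logic is of course far more involved):

```python
# Motors available for boost/regenerative braking in each fixed gear,
# per the list above: 1st gear two, 2nd one, 3rd two, 4th one.
MOTORS_PER_FIXED_GEAR = {1: 2, 2: 1, 3: 2, 4: 1}

def ecvt_mode(fixed_gear):
    """Mode 1 (input-split) up to and including 2nd gear; Mode 2 (compound-split) after."""
    return 1 if fixed_gear <= 2 else 2

for gear in (1, 2, 3, 4):
    print(f"Gear {gear}: ECVT Mode {ecvt_mode(gear)}, "
          f"{MOTORS_PER_FIXED_GEAR[gear]} motor(s) for boost/braking")
```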

The result is trend-setting hybrid technology that provides superior fuel economy, performance and load carrying capability.

The full hybrid system being co-developed by General Motors, DaimlerChrysler and the BMW Group has an overall mechanical content and size similar to a conventional automatic transmission, yet this full hybrid transmission can operate in infinitely variable gear ratios or one of the four fixed-gear ratios.

A sophisticated electronic control module constantly optimizes the entire hybrid powertrain system to select the most efficient operation point for the power level demanded by the driver.

Key Advantages

When compared to conventional hybrid systems, this avant-garde hybrid technology, relying on both the ECVT modes and the four fixed gear ratios, provides advantages in combined (city and highway) fuel economy, dynamics and towing capability.

Traditional hybrid systems typically have only one torque-splitting arrangement and no fixed mechanical ratios. These systems are often called “one-mode” hybrids. Due to their less capable mechanical content, one-mode hybrids need to transmit a significant amount of power through an electrical path that is 20 percent less efficient than a mechanical path. This usually requires substantial compromise in vehicle capability, or reliance on larger electric motors, which can create cost, weight and packaging issues.

General Motors, DaimlerChrysler and the BMW Group have conceived a full hybrid system featuring four fixed mechanical ratios, within the two ECVT modes, to reduce power transmission through the less efficient electrical path. Consequently, the electric motors are more compact and less dependent on engine size.

This combination of two ECVT modes and four fixed gear ratios eliminates the drawbacks of one-mode hybrid systems to allow for efficient operation throughout a vehicle’s operating range, at low and high speeds. It also allows for application across a broader variety of vehicles. It is particularly beneficial in demanding applications that require larger engines, such as towing, hill climbing or carrying heavy loads.

Existing internal combustion engines can be used with relatively minimal alteration because the full hybrid system imposes no significant limitation on the size or type of engine. It enables the three global automakers to package internal combustion engines with the full hybrid transmissions more cost-effectively and offer the fuel-saving technology across a wider range of vehicles.

Initial applications are suitable for front-engine, rear- and four-wheel-drive vehicle architectures, but the full hybrid system has the flexibility to be used in front-engine, front-wheel-drive architectures in the future as well.

Global Hybrid Cooperation

General Motors, DaimlerChrysler and the BMW Group have formed a cooperative effort called the Global Hybrid Cooperation, which is actively developing this next generation hybrid powertrain system. In an alliance of equals, all three partners are pooling expertise and resources to jointly and efficiently develop hybrid technology. Each company will individually integrate the full hybrid system into the design and manufacturing of vehicles in accordance with their brand specific requirements.

In Troy, Michigan, the “GM, DaimlerChrysler and BMW Hybrid Development Center” houses together engineers and specialists from all three companies to develop the complete hybrid system and the individual components -- electric motors, high-performance electronics, wiring, energy management, and hybrid system control units. In addition, the “GM, DaimlerChrysler and BMW Hybrid Development Center” will be responsible for system integration and project management.

A key factor in ensuring optimum development is the focus on a flexible system design that can be scaled to the size, mass and performance needs of the various vehicle concepts and brands. The extensive sharing of components and the collaborative relationship with suppliers will enable the alliance partners to achieve economies of scale and associated cost advantages that will also benefit customers. Currently full hybrid systems are under development for front- and rear-wheel-drive passenger cars, and light-duty truck and SUV applications.

General Motors Corp. , the world’s largest automaker, has been the global industry sales leader for 75 years. Founded in 1908, GM today employs about 327,000 people around the world. With global headquarters in Detroit, GM manufactures its cars and trucks in 33 countries. In 2005, 9.17 million GM cars and trucks were sold globally under the following brands: Buick, Cadillac, Chevrolet, GMC, GM Daewoo, Holden, HUMMER, Opel, Pontiac, Saab, Saturn and Vauxhall. GM operates one of the world’s leading finance companies, GMAC Financial Services, which offers automotive, residential and commercial financing and insurance. GM’s OnStar subsidiary is the industry leader in vehicle safety, security and information services. More information on GM can be found at www.gm.com.

DaimlerChrysler’s product portfolio ranges from small cars to sports cars and luxury sedans; and from versatile vans to heavy duty trucks or comfortable coaches. DaimlerChrysler’s passenger car brands include Maybach, Mercedes-Benz, Chrysler, Jeep®, Dodge and smart. Commercial vehicle brands include Mercedes-Benz, Freightliner, Sterling, Western Star, Setra, Mitsubishi Fuso, Thomas Built Buses and Orion. DaimlerChrysler’s strategy rests on four pillars: Excellent products offering outstanding customer value, leading brands, innovations and technology leadership and global presence and networking. With 382,724 employees, DaimlerChrysler achieved revenues of €149.8 billion in 2005.

The BMW Group covers the BMW, MINI and Rolls-Royce brands. The BMW Group is the only automobile company worldwide to operate with all its brands exclusively in the premium segments of the automobile market, from the small car to the absolute top segment. The BMW Group's vehicles provide outstanding product substance in terms of aesthetic design, dynamic performance, cutting-edge technology and quality, underlining the company's leadership in technology and innovation. Today, with revenues of 46.7 billion euro, annual sales of 1.328 million automobiles (thereof 200,000 MINIs), 97,500 BMW motorcycles and with almost 106,000 associates, the BMW Group is one of the world's ten largest automobile manufacturers.

America’s fastest train moves ahead

Funding boost propels maglev, which still faces headwinds

Image: The magnetic levitating train, or "maglev," can travel at up to 310 mph, and could compete with commercial airplanes, which cruise at about 550 mph. Credit: General Atomics


Could America’s fastest train whisk us away from $4-a-gallon gas guzzlers?

Thanks to a $45 million infusion from a transportation bill signed by President Bush in early June, there could someday be a magnetic levitating train, or “maglev,” soaring from Disneyland to Las Vegas at a maximum speed of 310 mph — 180 mph on average.

After the research phase is complete in about three years, the private partnership behind the effort, American Magline Group, comes to its biggest crossroads: obtaining $12 billion in funding for construction.

“If we had the money tomorrow, we’d build it in five years,” said Cummings of the American Magline Group.

What’s slowing down America’s fastest train, however, is the hefty cost of crafting the infrastructure — including the guideway — from scratch, because the fastest train can’t run on ordinary steel tracks. The $45 million from the federal government will only cover pre-construction obligations, including environmental testing in the Mojave Desert, where the line would be laid.

But as spiking gas prices and traffic pinch both nerves and wallets, and flight costs and delays hamper air travel, the maglev joins the list of alternatives to the nation’s transportation tribulations.

“With our gasoline prices and everything else going on, people and government are ready to make a commitment,” Cummings said. Until now, “we haven’t committed to high-speed trains in this country — at all.”

America’s fastest train could compete with air travel. Flying from Anaheim, Calif., to Vegas on a passenger jet cruising at about 550 mph can cost upward of $150, while a ticket for the same route on a maglev would cost $55, according to the American Magline Group.

Plus, the maglev doesn’t pollute. It’s energy efficient. And it’s low-maintenance because the train levitates - thanks to magnets - avoiding wear-and-tear on the underlying “guideway.” The guideway is what propels the vehicle, through a magnetic field established by the electrical grid: upping the current accelerates the train, lowering the current slows it, and reversing the current stops it or pushes it backwards.
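
As a toy model of that control relationship (an illustration only, not the actual control law), treat thrust as proportional to the guideway current, so raising, lowering or reversing the current maps directly to accelerating, slowing or reversing the train:

```python
# Toy model only: thrust proportional to the guideway coil current.
def step(speed_mph, current_amps, gain=0.5, dt=1.0):
    """Advance the train's speed one time step under the given coil current."""
    return speed_mph + gain * current_amps * dt

v = 0.0
v = step(v, 100)    # raise the current: the train accelerates
v = step(v, 100)
v = step(v, -100)   # reverse the current: the train slows
print(v)  # 50.0
```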

Paul Saffo, a Silicon Valley technology forecaster, said the lure of Las Vegas might just be what it takes to turn a profit with the maglev. The world’s first maglev, a 19-mile line in Shanghai, China, doesn’t garner enough traffic to offset the initial investment. But there are plans to extend the route, with hopes of attracting enough riders to reach critical mass.

While the California-Nevada maglev has scored the most federal funding to date, two other lines, from Pittsburgh International Airport to downtown and from Baltimore to D.C., are also competing for federal dollars for construction.

The two East Coast lines make sense to many, and Cummings of the American Magline Group said the Disneyland-to-Vegas line is more than what critics have dismissed it as: a “gamblers’ express.”

Is America's fastest train a practical solution for our transportation woes?

While the Western line aims to relieve traffic on the congested Interstate 15 highway which leads to “Sin City,” the first two segments, which would connect Las Vegas to Primm, Nev., and Orange County to Ontario, Calif., could shoot commuters to work and back home, he said.

“It’s an exciting alternative if you want to live in the suburbs, outside the main city,” Cummings said.

Not everyone is convinced the maglev is a good idea. The Federal Railroad Administration argues that transportation dollars should aid America’s current public transportation system. The railroad administration, which asked for $100 million in funding from the federal government, received only $30 million under the recent transportation bill.

“It’s great to have a train that goes 200 mph from Disneyland to Las Vegas, but that money could improve things in Los Angeles, Chicago, Miami, New York ... Seattle,” said FRA spokesman Steve Kulm.

New and Improved Antimatter Spaceship for Mars Missions

Most self-respecting starships in science fiction stories use antimatter as fuel for a good reason – it’s the most potent fuel known. While tons of chemical fuel are needed to propel a human mission to Mars, just tens of milligrams of antimatter will do (a milligram is about one-thousandth the weight of a piece of the original M&M candy).

Image right: A spacecraft powered by a positron reactor would resemble this artist's concept of the Mars Reference Mission spacecraft. Credit: NASA

However, in reality this power comes with a price. Some antimatter reactions produce blasts of high energy gamma rays. Gamma rays are like X-rays on steroids. They penetrate matter and break apart molecules in cells, so they are not healthy to be around. High-energy gamma rays can also make the engines radioactive by fragmenting atoms of the engine material.

The NASA Institute for Advanced Concepts (NIAC) is funding a team of researchers working on a new design for an antimatter-powered spaceship that avoids this nasty side effect by producing gamma rays with much lower energy.

Antimatter is sometimes called the mirror image of normal matter because while it looks just like ordinary matter, some properties are reversed. For example, normal electrons, the familiar particles that carry electric current in everything from cell phones to plasma TVs, have a negative electric charge. Anti-electrons have a positive charge, so scientists dubbed them "positrons".

When antimatter meets matter, both annihilate in a flash of energy. This complete conversion to energy is what makes antimatter so powerful. Even the nuclear reactions that power atomic bombs come in a distant second, with only about three percent of their mass converted to energy.
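
That "complete conversion" can be put in numbers with E = mc², counting both the antimatter and the equal mass of ordinary matter it annihilates. A quick estimate, using the 10-milligram mission quantity mentioned later in this article:

```python
C = 2.998e8  # speed of light, m/s

def annihilation_energy_joules(antimatter_kg):
    # The antimatter annihilates an equal mass of ordinary matter,
    # so twice the antimatter mass is converted to energy.
    return 2 * antimatter_kg * C**2

energy = annihilation_energy_joules(10e-6)  # 10 milligrams of positrons
print(f"{energy:.2e} J")  # ~1.8e12 J, on the order of 430 tons of TNT
```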

Previous antimatter-powered spaceship designs employed antiprotons, which produce high-energy gamma rays when they annihilate. The new design will use positrons, which make gamma rays with about 400 times less energy.

The NIAC research is a preliminary study to see if the idea is feasible. If it looks promising, and funds are available to successfully develop the technology, a positron-powered spaceship would have a couple advantages over the existing plans for a human mission to Mars, called the Mars Reference Mission.

Image left: A diagram of a rocket powered by a positron reactor. Positrons are directed from the storage unit to the attenuating matrix, where they interact with the material and release heat. Liquid hydrogen (H2) circulates through the attenuating matrix and picks up the heat. The hydrogen then flows to the nozzle exit (bell-shaped area in yellow and blue), where it expands into space, producing thrust. Credit: Positronics Research, LLC

"The most significant advantage is more safety," said Dr. Gerald Smith of Positronics Research, LLC, in Santa Fe, New Mexico. The current Reference Mission calls for a nuclear reactor to propel the spaceship to Mars. This is desirable because nuclear propulsion reduces travel time to Mars, increasing safety for the crew by reducing their exposure to cosmic rays. Also, a chemically-powered spacecraft weighs much more and costs a lot more to launch. The reactor also provides ample power for the three-year mission. But nuclear reactors are complex, so more things could potentially go wrong during the mission. "However, the positron reactor offers the same advantages but is relatively simple," said Smith, lead researcher for the NIAC study.

Also, nuclear reactors are radioactive even after their fuel is used up. After the ship arrives at Mars, Reference Mission plans are to direct the reactor into an orbit that will not encounter Earth for at least a million years, when the residual radiation will be reduced to safe levels. However, there is no leftover radiation in a positron reactor after the fuel is used up, so there is no safety concern if the spent positron reactor should accidentally re-enter Earth's atmosphere, according to the team.

It will be safer to launch as well. If a rocket carrying a nuclear reactor explodes, it could release radioactive particles into the atmosphere. "Our positron spacecraft would release a flash of gamma-rays if it exploded, but the gamma rays would be gone in an instant. There would be no radioactive particles to drift on the wind. The flash would also be confined to a relatively small area. The danger zone would be about a kilometer (about a half-mile) around the spacecraft. An ordinary large chemically-powered rocket has a danger zone of about the same size, due to the big fireball that would result from its explosion," said Smith.

Another significant advantage is speed. The Reference Mission spacecraft would take astronauts to Mars in about 180 days. "Our advanced designs, like the gas core and the ablative engine concepts, could take astronauts to Mars in half that time, and perhaps even in as little as 45 days," said Kirby Meyer, an engineer with Positronics Research on the study.

Advanced engines do this by running hot, which increases their efficiency or "specific impulse" (Isp). Isp is the "miles per gallon" of rocketry: the higher the Isp, the faster you can go before you use up your fuel supply. The best chemical rockets, like NASA's Space Shuttle main engine, max out at around 450 seconds, which means a pound of fuel will produce a pound of thrust for 450 seconds. A nuclear or positron reactor can make over 900 seconds. The ablative engine, which slowly vaporizes itself to produce thrust, could go as high as 5,000 seconds.
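
The payoff of higher Isp follows from the ideal (Tsiolkovsky) rocket equation, delta-v = Isp × g0 × ln(m0/mf). A quick comparison at an assumed mass ratio m0/mf of 3 (an illustration, not a mission figure):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_seconds, mass_ratio):
    """Ideal rocket equation: delta-v = Isp * g0 * ln(m0/mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

# Isp values from the text: chemical (~450 s), nuclear/positron (~900 s),
# ablative (~5000 s). Mass ratio of 3 is an assumed illustration.
for isp in (450, 900, 5000):
    print(f"Isp {isp:>4} s -> delta-v {delta_v(isp, 3.0) / 1000:.1f} km/s")
```

Doubling Isp doubles the delta-v for the same fuel fraction, which is why the hotter-running engines cut the Mars trip time so sharply.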

Image right: This is an artist's concept of an advanced positron rocket engine, called an ablative engine. This engine produces thrust when material in the nozzle is vaporized (ablated). In the image, the engine emits blue-white exhaust as thin layers of material are vaporized by positrons in tiny capsules surrounded by lead. The capsules are shot into the nozzle compartment many times per second. Once in the nozzle compartment, the positrons are allowed to interact with the capsule, releasing gamma rays. The lead absorbs the gamma rays and radiates lower-energy X-rays, which vaporize the nozzle material. This complication is necessary because X-rays are more efficiently absorbed by the nozzle material than gamma rays would be. Credit: Positronics Research, LLC

One technical challenge to making a positron spacecraft a reality is the cost to produce the positrons. Because of its spectacular effect on normal matter, there is not a lot of antimatter sitting around. In space, it is created in collisions of high-speed particles called cosmic rays. On Earth, it has to be created in particle accelerators, immense machines that smash atoms together. The machines are normally used to discover how the universe works on a deep, fundamental level, but they can be harnessed as antimatter factories.

"A rough estimate to produce the 10 milligrams of positrons needed for a human Mars mission is about 250 million dollars using technology that is currently under development," said Smith. This cost might seem high, but it has to be considered against the extra cost to launch a heavier chemical rocket (current launch costs are about $10,000 per pound) or the cost to fuel and make safe a nuclear reactor. "Based on the experience with nuclear technology, it seems reasonable to expect positron production cost to go down with more research," added Smith.

Another challenge is storing enough positrons in a small space. Because they annihilate normal matter, you can't just stuff them in a bottle. Instead, they have to be contained with electric and magnetic fields. "We feel confident that with a dedicated research and development program, these challenges can be overcome," said Smith.

If this is so, perhaps the first humans to reach Mars will arrive in spaceships powered by the same source that fired starships across the universes of our science fiction dreams.

Bill Steigerwald
NASA Goddard Space Flight Center

Giant Bug Eye Satellite Camera Could Capture an Entire City

Satellite imagery has become part of our everyday lives through applications like Google Maps. However, the current technology involves capturing tons of high-resolution images and stitching them together to form one larger image. This not only creates a huge amount of work to precisely align these images, it also leaves live-action surveillance susceptible to drop-outs as subjects move between cameras (yeah, I’ve seen 24 too).

Satellite Lens Array

It turns out that a team from Sony and the University of Alabama is working on an imaging system that can capture a huge area with a single camera. The imaging system would be built up from a large array of light-sensitive chips, all placed in the focal plane of a large multiple-lens system. The end result doesn't look much different from the compound eye of an insect.

One major advantage of a single camera approach is that near real time images could be transmitted to ground personnel, without the overhead of joining multiple images together. Also, this approach would allow for recording sequential images (the current design could support a rate of up to 4 frames per second).

According to the team’s recently published patent application, the camera could image an area of up to 10 square kilometers from a 7.5 kilometer altitude. The camera’s gigapixel resolution would allow it to capture images at a precision of up to 50 centimeters per pixel from that height.

Kerosene

Background

Kerosene is an oil distillate commonly used as a fuel or solvent. It is a thin, clear liquid consisting of a mixture of hydrocarbons that boil between 302°F and 527°F (150°C and 275°C). While kerosene can be extracted from coal, oil shale, and wood, it is primarily derived from refined petroleum. Before electric lights became popular, kerosene was widely used in oil lamps and was one of the most important refinery products. Today kerosene is primarily used as a heating oil, as fuel in jet engines, and as a solvent for insecticide sprays.
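
As a quick check on the quoted boiling range, the two scales are related by C = (F − 32) × 5/9:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

print(f_to_c(302), f_to_c(527))  # 150.0 275.0 - matching the quoted range
```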

History

Petroleum byproducts have been used since ancient times as adhesives and waterproofing agents. Over 2,000 years ago, Arabian scientists explored ways to distill petroleum into individual components that could be used for specialized purposes. As new uses were discovered, demand for petroleum increased. Kerosene was discovered in 1853 by Abraham Gesner, a Canadian physician who developed a process to extract the inflammable liquid from asphalt, a waxy petroleum mixture. The term kerosene is, in fact, derived from the Greek word for wax. Sometimes spelled kerosine, it is also called coal oil because of its asphalt origins.

Kerosene was an important commodity in the days before electric lighting and it was the first material to be chemically extracted on a large commercial scale. Mass refinement of kerosene and other petroleum products actually began in 1859 when oil was discovered in the United States. An entire industry evolved to develop oil drilling and purification techniques. Kerosene continued to be the most important refinery product throughout the late 1890s and early 1900s. It was surpassed by gasoline in the 1920s with the increasing popularity of the internal combustion engine. Other uses were found for kerosene after the demise of oil lamps, and today it is primarily used in residential heating and as a fuel additive. In the late 1990s, annual production of kerosene had grown to approximately 1 billion gal (3.8 billion L) in the United States alone.

Raw Materials

Kerosene is extracted from a mixture of petroleum chemicals found deep within the earth. This mixture consists of oil, rocks, water, and other contaminants in subterranean reservoirs made of porous layers of sandstone and carbonate rock. The oil itself is derived from decayed organisms that were buried along with the sediments of early geological eras. Over tens of millions of years, this organic residue was converted to petroleum by a pair of complex chemical processes known as diagenesis and catagenesis. Diagenesis, which occurs below 122°F (50°C), involves both microbial activity and chemical reactions such as dehydration, condensation, cyclization, and polymerization. Catagenesis occurs between 122°F and 392°F (50°C and 200°C) and involves thermocatalytic cracking, decarboxylation, and hydrogen disproportionation. The combination of these complex reactions creates the hydrocarbon mixture known as petroleum.
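The paired Fahrenheit/Celsius figures quoted throughout this article follow the standard conversion; a minimal sketch (in Python, added here purely for illustration):

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The diagenesis/catagenesis thresholds quoted above:
print(c_to_f(50))   # 122.0 — diagenesis occurs below this temperature (°F)
print(c_to_f(200))  # 392.0 — upper bound of the catagenesis range (°F)
```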

The Manufacturing Process

Crude oil recovery

  • 1 The first step in the manufacture of kerosene is to collect the crude oil. Most oil supplies are buried deep beneath the earth, and there are three primary types of drilling operations used to bring them to the surface. One method, cable-tool drilling, uses a jackhammer chisel to dislodge rock and dirt, creating a tunnel to reach oil deposits that lie just below the earth's surface. A second process, rotary drilling, is used to reach oil reservoirs that are much deeper underground. This process requires sinking a drill pipe with a rotating steel bit into the ground; the rotary drill spins rapidly to pulverize earth and rock. The third process, offshore drilling, uses a large ocean-borne platform to lower a shaft to the ocean floor.
  • 2 When any of these drilling processes break into an underground reservoir, a geyser erupts as dissolved hydrocarbon gases push the crude oil to the surface. These gases will force about 20% of the oil out of the well. Water is then pumped into the well to flush more of the oil out. This flushing process will recover about 50% of the buried oil. By adding a surfactant to the water even more oil can be recovered. However, even with the most rigorous flushing it is still impossible to remove 100% of the oil trapped underground. The crude oil recovered is pumped into large storage tanks and transported to a refining site.
  • 3 After the oil is collected, gross contaminants such as gases, water, and dirt are removed. Desalting is one cleansing operation that can be performed both in the oilfield and at the refinery site. After the oil has been washed, the water is separated from the oil. The properties of the crude oil are evaluated to determine which petroleum products can best be extracted from it. The key properties of interest include density, sulfur content, and other physical properties of the oil related to its carbon chain distribution. Since crude oil is a combination of many different hydrocarbon materials that are miscible in one another, it must be separated into its components before it can be turned into kerosene.
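The staged recovery figures in step 2 (roughly 20% from dissolved-gas drive, about 50% cumulative after water flushing) can be sketched numerically. A minimal illustration in Python; the reservoir size is a hypothetical value chosen only for the example:

```python
# Hypothetical reservoir used only to illustrate the quoted percentages.
OIL_IN_PLACE = 10_000_000  # barrels originally in place (assumed figure)

primary = 0.20 * OIL_IN_PLACE           # ~20% pushed out by dissolved gases
after_flushing = 0.50 * OIL_IN_PLACE    # ~50% cumulative after water flushing
secondary = after_flushing - primary    # oil added by the water flood alone

print(f"primary recovery:  {primary:,.0f} bbl")
print(f"water-flood gain:  {secondary:,.0f} bbl")
print(f"left underground:  {OIL_IN_PLACE - after_flushing:,.0f} bbl")
```

Even in this idealized sketch, half the oil stays in the ground, which is why the text notes that surfactants are added to push recovery higher.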

Separation

  • 4 Distillation is a separation process that involves heating the crude oil to separate its components. In this process the stream of oil is pumped into the bottom of a distillation column, where it is heated. The lighter hydrocarbon components in the mixture rise to the top of the column, while most of the high boiling-point fractions are left at the bottom. At the top of the column these lighter vapors reach the condenser, which cools them and returns them to a liquid state. The columns used to separate lighter oils are proportionally tall and thin (up to 116 ft [35 m] tall) because they only require atmospheric pressure. Tall distillation columns can more efficiently separate hydrocarbon mixtures because they allow more time for the high-boiling compounds to condense before they reach the top of the column.

    To separate some of the heavier fractions of oil, distillation columns must be operated at approximately one tenth of atmospheric pressure (75 mm Hg). These vacuum columns are structured to be very wide and short to help control pressure fluctuations. They can be over 40 ft (12 m) in diameter.

  • 5 The condensed liquid fractions can be collected separately. The fraction that is collected between 302°F and 482°F (150°C and 250°C) is kerosene. By comparison, gasoline is distilled between 86°F and 410°F (30°C and 210°C). By recycling the distilled kerosene through the column multiple times its purity can be increased. This recycling process is known as refluxing.
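The cut points quoted above overlap (gasoline 30–210°C, kerosene 150–250°C), so a given boiling point can fall within more than one fraction's range. A small sketch in Python, using only the two ranges given in the text:

```python
# Boiling ranges (°C) for the two cuts named in the text.
CUTS = {
    "gasoline": (30, 210),
    "kerosene": (150, 250),
}

def matching_cuts(bp_celsius):
    """Return the names of all cuts whose boiling range contains bp_celsius."""
    return [name for name, (lo, hi) in CUTS.items() if lo <= bp_celsius <= hi]

print(matching_cuts(100))  # ['gasoline']
print(matching_cuts(180))  # ['gasoline', 'kerosene'] — the overlap zone
print(matching_cuts(240))  # ['kerosene']
```

The overlap is why refluxing matters: recycling the distillate through the column sharpens the separation between adjacent cuts.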

Purification

  • 6 Once the oil has been distilled into its fractions, further processing in a series of chemical reactors is necessary to create kerosene. Catalytic reforming, alkylation, catalytic cracking, and hydroprocessing are four of the major processing techniques used in the production of kerosene. These reactions are used to control the carbon chain distribution by adding or removing carbon atoms from the hydrocarbon backbone. These reaction processes involve transferring the crude oil fraction into a separate vessel where it is chemically converted to kerosene.
  • 7 Once the kerosene has been reacted, additional extraction is required to remove secondary contaminants that can affect the oil's burning properties. Aromatic compounds, which are carbon ring structures such as benzene, are one class of contaminant that must be removed. Most extraction processes are conducted in large towers that maximize the contact time between the kerosene and the extraction solvent. Solvents are chosen based on the solubility of the impurities. In other words, the chemical impurities are more soluble in the solvent than they are in the kerosene. Therefore, as the kerosene flows through the tower, the impurities will tend to be drawn into the solvent phase. Once the contaminants have been pulled out of the kerosene, the solvent is removed, leaving the kerosene in a more purified state. The following extraction techniques are used to purify kerosene.

    [Figure: The distilling process of kerosene.]

    The Udex extraction process became popular in the United States during the 1970s. It uses a class of chemicals known as glycols as solvents. Both diethylene glycol and tetraethylene glycol are used because they have a high affinity for aromatic compounds.

    The Sulfolane process was created by the Shell company in 1962 and is still used in many extraction units 40 years later. The solvent used in this process, sulfolane, is a strongly polar compound that is more efficient than the glycol systems used in the Udex process, with a greater heat capacity and greater chemical stability. This process uses a piece of equipment known as a rotating disc contactor to help purify the kerosene.

    The Lurgi Arosolvan process uses N-methyl-2-pyrrolidinone mixed with water or glycol, which increases the selectivity of the solvent for contaminants. This process involves multiple-stage extraction towers up to 20 ft (6 m) in diameter and 116 ft (35 m) high.

    The dimethyl sulfoxide process involves two separate extraction steps that increase the selectivity of the solvent for the aromatic contaminants. This allows extraction of these contaminants at lower temperatures. In addition, chemicals used in this process are non-toxic and relatively inexpensive. It uses a specialized column, known as a Kuhni column, that is up to 10 ft (3 m) in diameter.

    The Union Carbide process uses the solvent tetraethylene glycol and adds a second extraction step. It is somewhat more cumbersome than other glycol processes.

    The Formex process uses N-formyl morpholine and a small percentage of water as the solvent and is flexible enough to extract aromatics from a variety of hydrocarbon materials.

    The Redox process (Recycle Extract Dual Extraction) is used for kerosene destined for use in diesel fuel. It improves the cetane number of fuels by selectively removing aromatic contaminants. The low-aromatic kerosene produced by this process is in high demand for aviation fuel and other military uses.

Final processing

  • 8 After extraction is complete, the refined kerosene is stored in tanks for shipping. It is delivered by tank trucks to facilities where the kerosene is packaged for commercial use. Industrial kerosene is stored in large metal tanks, but it may be packaged in small quantities for commercial use. Metal containers may be used because kerosene is not a gas and does not require pressurized storage vessels. However, its flammability dictates that it must be handled as a hazardous substance.

Quality Control

The distillation and extraction processes are not completely efficient, and some processing steps may have to be repeated to maximize kerosene production. For example, some of the unconverted hydrocarbons may be separated by further distillation and recycled for another pass through the converter. By recycling the petroleum waste through the reaction sequence several times, the quality of kerosene production can be optimized.

Byproducts/Waste

Some portion of the remaining petroleum fractions that cannot be converted to kerosene may be used in other applications such as lubricating oil. In addition, some of the compounds extracted during the purification process, such as aromatics and paraffins, can be used commercially. The specifications for kerosene and these other petroleum byproducts are set by the American Society for Testing and Materials (ASTM) and the American Petroleum Institute (API).

The Future

The future of kerosene depends on the discovery of new applications as well as the development of new methods of production. One new use is the military's increasing demand for high-grade kerosene as it replaces much of its diesel fuel with JP-8, a kerosene-based jet fuel. The diesel fuel industry is also exploring a new process that involves adding kerosene to low-sulfur diesel fuel to prevent it from gelling in cold weather. Commercial aviation may benefit from a new low-misting kerosene that reduces the risk of jet fuel explosion. In the residential sector, new and improved kerosene heaters that provide better protection from fire are anticipated to increase demand.

As demand for kerosene and its byproducts increases, new methods of refining and extracting kerosene will become even more important. One new method, developed by ExxonMobil, is a low-cost way to extract high-purity normal paraffins from kerosene. This process uses ammonia, which very efficiently adsorbs the contaminants. The method uses vapor-phase fixed-bed adsorption technology and yields normal paraffins that are greater than 90% pure.



Source: http://www.madehow.com/Volume-7/Kerosene.html





Exploring the astrological nature of a newly discovered planet

Although it requires astrological research over a lengthy period to truly validate and home in on the astrological nature of a newly discovered planet or asteroid, there are several things that we can explore to give us significant insight into the astrological nature of new astronomical bodies. These are outlined in "Guidelines on How to Explore the Astrology of Newly Discovered Objects in Our Solar System". You may wish to read that first. It explains the meaning of a planet's Orbital Cross; a planet's aphelion, perihelion and its nodes in the ecliptic; as well as other things to consider.

The Orbits of Pluto & Orcus

Although Orcus is a bit smaller than Pluto, Orcus has a nearly identical orbital size, orbital period (year), and orbital inclination. However, the orientation of Orcus' orbital plane in our solar system is tilted in the opposite direction from Pluto's. Orcus' orbit clearly reveals Orcus to be a complement to Pluto. Due to their complementary relationship, I present them together so we can get a better understanding of their similarities and differences.

[Figure: Pluto & Orcus orbital planes and their inclination to the ecliptic plane. Pluto's orbital inclination = 17.2°; Orcus' orbital inclination = 20.574°. Both Pluto's and Orcus' perihelia lie north of the ecliptic plane, but their nodal axes lie in opposite directions, creating an X in the ecliptic with an arc separation of 22° 49'.]

The Moons of Pluto & Orcus

Just as Pluto has Charon as its primary moon, so too does Orcus have a moon, discovered on Nov 13, 2005 by M.E. Brown and T.A. Suer. Orcus' moon was named Vanth in April 2009; Mike Brown, as the discoverer of Orcus, had the privilege of and responsibility for naming it. In Mike Brown's words as of March 29, 2009:

"The Moon of Orcus has about a ten day orbit around Orcus, in a tight precise circle. We suspect - though can’t yet prove - that Orcus and its satellite have their same faces locked towards each other constantly, like an orbiting dumbbell. Only one other Kuiper belt object and satellite are known to do this. Who? Pluto and Charon, of course.

The origin of the satellite of Orcus is confusing. Pluto and Charon are thought to have formed in a giant collision. Haumea clearly had a shattering blow to disperse moons and other family members. But small Kuiper belt objects are thought to have acquired moons by simple capture.

Orcus is right in the middle. Was the satellite from a collision or a capture? We had hoped to answer this question by observations from the Hubble Space Telescope. If the satellite had looked just like other known collisional satellite, we would have been pretty convinced. It doesn’t. Unfortunately that tells us less. We can’t rule out either. We have some ideas of new Hubble Space Telescope observations to try to tell the difference. For now, though, we’re just confused."

Vanth is a daimon (mediator, gatekeeper, demigod) in Etruscan mythology who guides the dead to the underworld.

More on the naming of Orcus' moon, Vanth, on Mike Brown's blog. (See April 2009 entries.)

Oil Exports visualized as Sankey Diagram

After my posts on visualizing Rotterdam port’s imports/exports and on Internet traffic maps, I have started to experiment with showing the export quantities and destinations for a certain trade good.

I wanted to do a Saudi Arabia or Iraq oil export Sankey map, but couldn't find good data. I finally came across this summary on Libyan oil exports, and converted the data from the pie chart Libyan Oil Exports, by Destination, 2006 into a Sankey-style export flow diagram.

Export data for oil from Libya, originally shown as a pie chart, has been converted to a Sankey diagram. It gives a good idea of where most exports go. Created with e!Sankey, using a Wikicommons world map as backdrop.

It was new to me that "Libya has the largest proven oil reserves in Africa" with 41.5 billion barrels, and estimated net exports of 1.525 million barrels per day in 2006.
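Converting such a pie chart into Sankey arrow widths is just a matter of multiplying the total by each destination's share. A minimal sketch in Python; the destination names and shares below are placeholders for illustration, not the actual figures from the cited chart:

```python
TOTAL_EXPORTS = 1.525  # million barrels per day, 2006 estimate (from the text)

# Placeholder shares for illustration only — the real 2006 shares must be
# read off the cited pie chart. They must sum to 1.0.
shares = {
    "Italy": 0.40,
    "Germany": 0.15,
    "Spain": 0.10,
    "Other": 0.35,
}

# Width of each Sankey arrow, in million barrels per day.
flows = {dest: TOTAL_EXPORTS * share for dest, share in shares.items()}

for dest, flow in flows.items():
    print(f"{dest}: {flow:.3f} Mbbl/d")
```

The arrow widths then sum back to the total export volume, which is a useful sanity check before drawing the diagram.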

The underlying map is a crop from a World map found on Wikicommons. I think it could be a little more transparent though…

U.S. Oil Import Sankey Movie

The renowned Rocky Mountain Institute (RMI), founded in 1982 by Amory and Hunter Lovins, has an interactive oil imports map on its MOVE project webpage.

You can see the oil imports to the United States from January 1973 to August 2008 on a map that depicts the flow quantities as Sankey arrows linking the country of origin and the U.S. If you switch to the unit “Dollar”, you can see the value of the oil imported depicted as Sankey arrows.

One can play the whole 35-year period as a movie, or use the slider on the timeline to see individual months. The data used is from publicly accessible EIA/DOE statistics.

A screenshot from RMI's interactive U.S. Oil Import Map showing the quantities of crude oil imports from different countries as Sankey arrows. Go to http://move.rmi.org/files/oilmap/RMI_Oil_Imports_Final_large.html to see the map for the period 1973 to 2008.

The United States is still 60% dependent on imported oil. RMI's MOVE project explores ways to reduce dependence on foreign crude oil. The goal is to "get completely off oil by 2050, led by business for profit."

Go to the RMI movie page and try it yourself. When I did the Libya oil export map last year I wasn't aware of this Sankey movie, which is of course much nicer.

Technology for the Recovery of Gold Fines & Ultrafines

Brazilian Experience
From the distribution of gold across the different size fractions of the plant feed, it is verified that:

- the calculated average gold grade of the plant feed is about 3.3 g/t (assuming an in-situ ore density of 1.6 t/m3);

- the ore is 69.1% by mass finer than 200 mesh (74 µm), and these fines contain 46.3% of the gold in the ore;

- of the minus-200-mesh fraction, only 7.4% by mass (carrying 9.3% of the total gold) lies between this mesh (74 µm) and 10 µm, which shows that 61.7% of the mass is finer than 10 µm and carries 37% of the gold in the ore;

- the coarse fractions show elevated gold grades distributed irregularly across the size range, with higher values in the +6#, -28+35# and -65+200# fractions.
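The figures above are enough to back-calculate a fraction's grade: grade = head grade × (gold share ÷ mass share). A quick check in Python against the minus-200-mesh fines quoted above:

```python
HEAD_GRADE = 3.3  # g/t, average gold grade of the plant feed (from the text)

def fraction_grade(mass_pct, gold_pct):
    """Gold grade (g/t) of a size fraction that carries gold_pct of the
    ore's gold while making up mass_pct of its mass."""
    return HEAD_GRADE * gold_pct / mass_pct

# The minus-200-mesh (74 µm) fines: 69.1% of the mass, 46.3% of the gold.
fines = fraction_grade(69.1, 46.3)
print(round(fines, 2))  # 2.21 — the fines run below the 3.3 g/t head grade
```

The same formula applied to the coarse fractions, which carry a large share of the gold in a small share of the mass, reproduces the "elevated grades" the text describes.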

The dense-liquid separation results show high percentages by weight of heavy minerals in the coarse fractions, falling off sharply in the finer fractions. In parallel, the gold grades in these heavies are of the order of tens of grams per tonne in the coarse fractions, rising to the order of hundreds of grams per tonne in fractions finer than 28 mesh.

Mineralogically the ore is composed basically of quartz, together with clay-limonite aggregates of varied porosity, iron oxides with varying degrees of hydration, and clay minerals. As a minor constituent, the presence of iron sulphate was observed.

The float product from dense-media separation consists essentially of quartz and clay aggregates, while the sink product consists of clay-limonite aggregates and iron oxides.

We observed the presence of free gold only at sizes finer than 28 mesh (the liberation mesh). In the coarse fractions the gold occurs locked in clay-limonite aggregates; from 28 mesh down it shows liberation of the order of 95%, the remaining 5% being associated with the aggregates that make up the sink product.

In the liberated fractions the gold occurs as grains with dendritic and equidimensional shapes; laminated particles are extremely rare in the finer sizes. Some grains are superficially impregnated by clay-limonite particles that cover up to 30% of their surface in the -28+35# fraction, diminishing in the finer fractions and practically non-existent below 65 mesh.

To investigate the mode of occurrence of the gold, bench-scale separation tests were performed, including dense-liquid and electromagnetic separations. The light fractions were found to consist essentially of microcrystalline quartz-clay aggregates with some limonite; their associated gold occurs in mixed grains where it represents less than 4% by weight.

The intermediate fractions comprise the same aggregates but with higher amounts of associated iron oxide; their gold represents less than 8% by weight in mixed grains. The gold contained in the liberated products is practically all free; those products consist basically of liberated cyrconite in the shape of nuggets.

In the coarser fraction, almost 50% of the contained gold is liberated; at finer meshes that value reaches a maximum of 85%. The characterization data for the desliming rejects show that at least 25% of the 37% of the gold contained in the deslimed material is liberated.
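Those two percentages combine into a lower bound on the liberated share of the ore's total gold; a one-line check in Python:

```python
gold_in_deslimed = 0.37   # share of the ore's gold in the deslimed material
liberated_of_that = 0.25  # lower bound on the liberated portion (from the text)

liberated_share = gold_in_deslimed * liberated_of_that
print(f"{liberated_share:.2%}")  # 9.25% of the ore's total gold, at minimum
```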

The main general conclusions of these characterization studies of the Salamangone plant feed and rejects were the following:

a. the gold present in the rejects is mainly fine and ultrafine, and not recoverable by gravity processes;

b. almost 65% of the gold contained in the desliming rejects is ultrafine, below 2 µm. The gold retained in the coarser fractions is found partly locked in aggregates of quartz and/or clay minerals and/or iron oxides, but mainly free, in approximate proportions of 10% and 25% of the total content.