
Giant Bug Eye Satellite Camera Could Capture an Entire City

Satellite imagery has become part of our everyday lives through applications like Google Maps. However, the current technology involves capturing tons of high-resolution images and stitching them together to form one larger image. This not only creates a huge amount of work to precisely align these images, but also leaves live-action surveillance susceptible to drop-outs as subjects move between cameras (yeah, I’ve seen 24 too).

Satellite Lens Array

It turns out that a team from Sony and the University of Alabama is working on an imaging system that can capture a huge area with a single camera. The imaging system would actually be built up from a large array of light-sensitive chips, all placed in the focal plane of a large multiple-lens system. The end result doesn’t look that much different than the complex eye of an insect.

One major advantage of a single camera approach is that near real time images could be transmitted to ground personnel, without the overhead of joining multiple images together. Also, this approach would allow for recording sequential images (the current design could support a rate of up to 4 frames per second).

According to the team’s recently published patent application, the camera could image an area of up to 10 square kilometers from a 7.5 kilometer altitude. The camera’s gigapixel resolution would allow it to capture images at a precision of up to 50 centimeters per pixel from that height.
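As a rough back-of-envelope sketch (not from the patent itself), the relationship between coverage area, ground resolution, and sensor pixel count can be checked directly; the figures below are the ones quoted above.

```python
# Back-of-envelope check: how many pixels does a sensor need to cover
# a given ground area at a given ground sample distance (GSD)?
def pixels_needed(area_km2: float, gsd_m: float) -> float:
    """Pixels required to image `area_km2` at `gsd_m` metres per pixel."""
    area_m2 = area_km2 * 1_000_000          # km^2 -> m^2
    return area_m2 / (gsd_m ** 2)           # each pixel covers gsd^2 m^2

# Figures from the article: 10 square kilometers at 50 cm per pixel.
px = pixels_needed(10, 0.5)
print(f"{px:.0f} pixels ({px / 1e6:.0f} MP)")  # 40000000 pixels (40 MP)
```

At 50 cm per pixel, 10 square kilometers needs only about 40 megapixels, so a gigapixel-class array would have considerable headroom for finer resolution or wider coverage.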



Kerosene is an oil distillate commonly used as a fuel or solvent. It is a thin, clear liquid consisting of a mixture of hydrocarbons that boil between 302°F and 527°F (150°C and 275°C). While kerosene can be extracted from coal, oil shale, and wood, it is primarily derived from refined petroleum. Before electric lights became popular, kerosene was widely used in oil lamps and was one of the most important refinery products. Today kerosene is primarily used as a heating oil, as fuel in jet engines, and as a solvent for insecticide sprays.


Petroleum byproducts have been used since ancient times as adhesives and waterproofing agents. Over 2,000 years ago, Arabian scientists explored ways to distill petroleum into individual components that could be used for specialized purposes. As new uses were discovered, demand for petroleum increased. Kerosene was discovered in 1853 by Abraham Gesner. A Canadian physician, Gesner developed a process to extract the inflammable liquid from asphalt, a waxy petroleum mixture. The term kerosene is, in fact, derived from the Greek word for wax. Sometimes spelled kerosine or kerosiene, it is also called coal oil because of its asphalt origins.

Kerosene was an important commodity in the days before electric lighting, and it was the first material to be chemically extracted on a large commercial scale. Mass refinement of kerosene and other petroleum products actually began in 1859 when oil was discovered in the United States. An entire industry evolved to develop oil drilling and purification techniques. Kerosene continued to be the most important refinery product throughout the late 1890s and early 1900s. It was surpassed by gasoline in the 1920s with the increasing popularity of the internal combustion engine. Other uses were found for kerosene after the demise of oil lamps, and today it is primarily used in residential heating and as a fuel additive. By the late 1990s, annual production of kerosene had grown to approximately 1 billion gal (3.8 billion L) in the United States alone.

Raw Materials

Kerosene is extracted from a mixture of petroleum chemicals found deep within the earth. This mixture consists of oil, rocks, water, and other contaminants in subterranean reservoirs made of porous layers of sandstone and carbonate rock. The oil itself is derived from decayed organisms that were buried along with the sediments of early geological eras. Over tens of millions of years, this organic residue was converted to petroleum by a pair of complex chemical processes known as diagenesis and catagenesis. Diagenesis, which occurs below 122°F (50°C), involves both microbial activity and chemical reactions such as dehydration, condensation, cyclization, and polymerization. Catagenesis occurs between 122°F and 392°F (50°C and 200°C) and involves thermocatalytic cracking, decarboxylation, and hydrogen disproportionation. The combination of these complex reactions creates the hydrocarbon mixture known as petroleum.

The Manufacturing Process

Crude oil recovery

  • 1 The first step in the manufacture of kerosene is to collect the crude oil. Most oil supplies are buried deep beneath the earth, and there are three primary types of drilling operations used to bring it to the surface. One method, cable-tool drilling, involves using a jackhammer chisel to dislodge rock and dirt to create a tunnel to reach oil deposits that reside just below the earth's surface. A second process, rotary drilling, is used to reach oil reservoirs that are much deeper underground. This process requires sinking a drill pipe with a rotating steel bit into the ground. This rotary drill spins rapidly to pulverize earth and rock. The third drilling process is offshore drilling, which uses a large ocean-borne platform to lower a drill shaft to the ocean floor.
  • 2 When any of these drilling processes break into an underground reservoir, a geyser erupts as dissolved hydrocarbon gases push the crude oil to the surface. These gases will force about 20% of the oil out of the well. Water is then pumped into the well to flush more of the oil out. This flushing process will recover about 50% of the buried oil. By adding a surfactant to the water even more oil can be recovered. However, even with the most rigorous flushing it is still impossible to remove 100% of the oil trapped underground. The crude oil recovered is pumped into large storage tanks and transported to a refining site.
  • 3 After the oil is collected, gross contaminants such as gases, water, and dirt are removed. Desalting is one cleansing operation that can be performed both in the oilfield and at the refinery site. After the oil has been washed, the water is separated from the oil. The properties of the crude oil are evaluated to determine which petroleum products can best be extracted from it. The key properties of interest include density, sulfur content, and other physical properties of the oil related to its carbon chain distribution. Since crude oil is a combination of many different hydrocarbon materials that are miscible in one another, it must be separated into its components before it can be turned into kerosene.
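The staged recovery figures in step 2 can be sketched numerically. This is an illustrative toy calculation: the reservoir size and the surfactant-flood total are assumptions for illustration (the article says only that surfactant recovers "even more"), not figures from the text.

```python
# Illustrative sketch: cumulative crude oil recovery by stage for a
# hypothetical 1-million-barrel reservoir. The 20% and 50% figures come
# from the article; the 60% surfactant figure is an assumed placeholder.
reservoir_bbl = 1_000_000  # assumed reservoir size, for illustration only

stages = [
    ("primary (gas-drive geyser)", 0.20),  # dissolved gases push out ~20%
    ("water flooding",             0.50),  # flushing recovers ~50% in total
    ("surfactant flooding",        0.60),  # assumed illustrative total
]

previous = 0.0
for name, cumulative in stages:
    gained = (cumulative - previous) * reservoir_bbl
    print(f"{name}: +{gained:,.0f} bbl (cumulative {cumulative:.0%})")
    previous = cumulative
```

Even in this optimistic sketch, 40% of the oil stays trapped underground, consistent with the article's note that no flushing regime recovers 100%.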


  • 4 Distillation is one type of separation process that involves heating the crude oil to separate its components. In this process the stream of oil is pumped into the bottom of a distillation column, where it is heated. The lighter hydrocarbon components in the mixture rise to the top of the column, while most of the high boiling-point fractions are left at the bottom. At the top of the column these lighter vapors reach the condenser, which cools them and returns them to a liquid state. The columns used to separate lighter oils are proportionally tall and thin (up to 116 ft [35 m] tall) because they only require atmospheric pressure. Tall distillation columns can more efficiently separate hydrocarbon mixtures because they allow more time for the high-boiling compounds to condense before they reach the top of the column.

    To separate some of the heavier fractions of oil, distillation columns must be operated at approximately one tenth of atmospheric pressure (75 mm Hg). These vacuum columns are structured to be very wide and short to help control pressure fluctuations. They can be over 40 ft (12 m) in diameter.

  • 5 The condensed liquid fractions can be collected separately. The fraction that is collected between 302°F and 482°F (150°C and 250°C) is kerosene. By comparison, gasoline is distilled between 86°F and 410°F (30°C and 210°C). By recycling the distilled kerosene through the column multiple times its purity can be increased. This recycling process is known as refluxing.
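A minimal sketch of how boiling ranges define distillation cuts, using the two ranges quoted above. Note that the ranges overlap, so a mid-range compound can report to either cut; the example compounds and their boiling points are approximate illustrations, not data from the article.

```python
# Distillation cuts defined by boiling range (from the text):
# gasoline 30-210 C, kerosene 150-250 C. The ranges overlap.
CUTS_C = {"gasoline": (30, 210), "kerosene": (150, 250)}

def cuts_for(boiling_point_c: float) -> list[str]:
    """Return the cut(s) whose boiling range contains this boiling point."""
    return [name for name, (lo, hi) in CUTS_C.items()
            if lo <= boiling_point_c <= hi]

# Hypothetical example compounds (boiling points approximate):
print(cuts_for(99))    # heptane-like  -> ['gasoline']
print(cuts_for(174))   # decane-like   -> ['gasoline', 'kerosene']
print(cuts_for(216))   # dodecane-like -> ['kerosene']
```

The overlap between the cuts is one reason refluxing (recycling the distillate through the column) is needed to sharpen the separation.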


  • 6 Once the oil has been distilled into its fractions, further processing in a series of chemical reactors is necessary to create kerosene. Catalytic reforming, alkylation, catalytic cracking, and hydroprocessing are four of the major processing techniques used in the conversion to kerosene. These reactions are used to control the carbon chain distribution by adding or removing carbon atoms from the hydrocarbon backbone. They involve transferring the crude oil fraction into a separate vessel where it is chemically converted to kerosene.
  • 7 Once the kerosene has been reacted, additional extraction is required to remove secondary contaminants that can affect the oil's burning properties. Aromatic compounds, which are carbon ring structures such as benzene, are one class of contaminant that must be removed. Most extraction processes are conducted in large towers that maximize the contact time between the kerosene and the extraction solvent. Solvents are chosen based on the solubility of the impurities. In other words, the chemical impurities are more soluble in the solvent than they are in the kerosene. Therefore, as the kerosene flows through the tower, the impurities tend to be drawn into the solvent phase. Once the contaminants have been pulled out of the kerosene, the solvent is removed, leaving the kerosene in a more purified state. The following extraction techniques are used to purify kerosene.

    The distilling process of kerosene.

    The Udex extraction process became popular in the United States during the 1970s. It uses a class of chemicals known as glycols as solvents. Both diethylene glycol and tetraethylene glycol are used because they have a high affinity for aromatic compounds.

    The Sulfolane process was created by the Shell company in 1962 and is still used in many extraction units 40 years later. The solvent used in this process, sulfolane, is a strongly polar compound that is more efficient than the glycol systems used in the Udex process: it has a greater heat capacity and greater chemical stability. This process uses a piece of equipment known as a rotating disc contactor to help purify the kerosene.

    The Lurgi Arosolvan process uses N-methyl-2-pyrrolidinone mixed with water or glycol, which increases the selectivity of the solvent for contaminants. This process involves multiple-stage extraction towers up to 20 ft (6 m) in diameter and 116 ft (35 m) high.

    The dimethyl sulfoxide process involves two separate extraction steps that increase the selectivity of the solvent for the aromatic contaminants. This allows extraction of these contaminants at lower temperatures. In addition, chemicals used in this process are non-toxic and relatively inexpensive. It uses a specialized column, known as a Kuhni column, that is up to 10 ft (3 m) in diameter.

    The Union Carbide process uses the solvent tetraethylene glycol and adds a second extraction step. It is somewhat more cumbersome than other glycol processes.

    The Formex process uses N-formyl morpholine and a small percentage of water as the solvent and is flexible enough to extract aromatics from a variety of hydrocarbon materials.

    The Redox process (Recycle Extract Dual Extraction) is used for kerosene destined for use in diesel fuel. It improves the octane number of fuels by selectively removing aromatic contaminants. The low-aromatic kerosene produced by this process is in high demand for aviation fuel and other military uses.
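Common to all of these solvent-extraction schemes is a simple stagewise mass balance. The sketch below is a generic, assumption-laden model (ideal equilibrium stages, constant partition coefficient K), not a description of any specific process above, and the numerical values are hypothetical.

```python
# Simplified solvent-extraction mass balance: with partition coefficient K
# (impurity concentration in solvent / in kerosene at equilibrium) and
# solvent-to-feed ratio S, each ideal stage leaves a fraction
# 1 / (1 + K*S) of the impurity in the kerosene phase.
def impurity_remaining(k_partition: float, solvent_ratio: float,
                       stages: int) -> float:
    """Fraction of impurity left in the kerosene after `stages` stages."""
    per_stage = 1.0 / (1.0 + k_partition * solvent_ratio)
    return per_stage ** stages

# Hypothetical values: K = 5, solvent:feed = 0.5, three stages.
print(f"{impurity_remaining(5, 0.5, 3):.1%} of aromatics remain")  # 2.3%
```

The exponential dependence on stage count is why tall multi-stage towers (and devices like the rotating disc contactor) are used: each added stage multiplies the residual impurity down.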

Final processing

  • 8 After extraction is complete, the refined kerosene is stored in tanks for shipping. It is delivered by tank trucks to facilities where the kerosene is packaged for commercial use. Industrial kerosene is stored in large metal tanks, but it may be packaged in small quantities for commercial use. Metal containers may be used because kerosene is not a gas and does not require pressurized storage vessels. However, its flammability dictates that it must be handled as a hazardous substance.

Quality Control

The distillation and extraction processes are not completely efficient, and some processing steps may have to be repeated to maximize kerosene production. For example, some of the unconverted hydrocarbons may be separated by further distillation and recycled for another pass through the converter. By recycling the petroleum waste through the reaction sequence several times, the quality of kerosene production can be optimized.
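The recycling idea can be sketched as a toy model: if each pass through the converter turns a fixed fraction of the remaining unconverted hydrocarbons into kerosene, the cumulative yield climbs with every recycle. The 60% per-pass conversion is an assumed, illustrative figure, not from the text.

```python
# Toy model of the recycle loop: each pass converts a fraction
# `conversion_per_pass` of whatever is still unconverted.
def cumulative_yield(conversion_per_pass: float, passes: int) -> float:
    """Total fraction converted after a number of recycle passes."""
    unconverted = 1.0
    for _ in range(passes):
        unconverted *= (1.0 - conversion_per_pass)
    return 1.0 - unconverted

for n in (1, 2, 3):
    print(f"after {n} pass(es): {cumulative_yield(0.6, n):.0%}")
# after 1 pass(es): 60%
# after 2 pass(es): 84%
# after 3 pass(es): 94%
```

Diminishing returns set in quickly, which is why only "several" recycle passes are economical before the residue is diverted to byproducts.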

Byproducts/Waste

Some portion of the remaining petroleum fractions that cannot be converted to kerosene may be used in other applications such as lubricating oil. In addition, some of the compounds extracted during the purification process can be used commercially. These include compounds such as paraffin. The specifications for kerosene and these other petroleum byproducts are set by the American Society for Testing and Materials (ASTM) and the American Petroleum Institute (API).

The Future

The future of kerosene depends on the discovery of new applications as well as the development of new methods of production. New uses include increasing military demand for high grade kerosene to replace much of its diesel fuel with JP-8, which is a kerosene based jet fuel. The diesel fuel industry is also exploring a new process that involves adding kerosene to low sulfur diesel fuel to prevent it from gelling in cold weather. Commercial aviation may benefit by reducing the risk of jet fuel explosion by creating a new low-misting kerosene. In the residential sector, new and improved kerosene heaters that provide better protection from fire are anticipated to increase demand.

As demand for kerosene and its byproducts increases, new methods of refining and extracting kerosene will become even more important. One new method, developed by ExxonMobil, is a low-cost way to extract high-purity normal paraffins from kerosene. This process uses ammonia, which very efficiently absorbs the contaminants. It uses vapor-phase fixed-bed adsorption technology and yields paraffins that are greater than 90% pure.

Read more:

Exploring the astrological nature of a newly discovered planet

Although it requires astrological research over a lengthy period to truly validate and home in on the astrological nature of a newly discovered planet or asteroid, there are several things we can explore that give us significant insight into the astrological nature of new astronomical bodies. These are outlined in "Guidelines on How to Explore the Astrology of Newly Discovered Objects in Our Solar System." You may wish to read that first. It explains the meaning of a planet's Orbital Cross; a planet's aphelion, perihelion, and its nodes in the ecliptic; as well as other things to consider.

The Orbits of Pluto & Orcus

Although Orcus is a bit smaller than Pluto, Orcus has a nearly identical orbital size, orbital period (year), and orbital inclination. However, the orientation of Orcus' orbital plane in our solar system is tilted in the opposite direction from Pluto's. Orcus' orbit clearly reveals Orcus to be a complement to Pluto. Due to their complementary relationship, I present them together so we can get a better understanding of their similarities and differences.

Pluto & Orcus orbital planes and their inclination to the ecliptic plane. Pluto's orbital inclination = 17.2°; Orcus' orbital inclination = 20.574°. Both Pluto's and Orcus' perihelia lie north of the ecliptic plane. However, their nodal axes lie in opposite directions, creating an X in the ecliptic with an arc separation of 22° 49'.

The Moons of Pluto & Orcus

Just as Pluto has Charon as its primary moon, so too does Orcus have a moon, discovered on Nov 13, 2005 by M.E. Brown and T.A. Suer. The moon's name, Vanth, was chosen in April 2009. Mike Brown, as the discoverer of Orcus, had the privilege of, and was responsible for, its naming. In Mike Brown's words, as of March 29, 2009:

"The Moon of Orcus has about a ten day orbit around Orcus, in a tight precise circle. We suspect - though can’t yet prove - that Orcus and its satellite have their same faces locked towards each other constantly, like an orbiting dumbbell. Only one other Kuiper belt object and satellite are known to do this. Who? Pluto and Charon, of course.

The origin of the satellite of Orcus is confusing. Pluto and Charon are thought to have formed in a giant collision. Haumea clearly had a shattering blow to disperse moons and other family members. But small Kuiper belt objects are thought to acquire moons by simple capture.

Orcus is right in the middle. Was the satellite from a collision or a capture? We had hoped to answer this question by observations from the Hubble Space Telescope. If the satellite had looked just like other known collisional satellites, we would have been pretty convinced. It doesn’t. Unfortunately that tells us less. We can’t rule out either. We have some ideas of new Hubble Space Telescope observations to try to tell the difference. For now, though, we’re just confused."

Vanth is a daimon (mediator, gatekeeper, demigod) in Etruscan mythology who guides the dead to the underworld.

More on the naming of Orcus's moon, Vanth, on Mike Brown's blog (see April 2009 entries).

Oil Exports visualized as Sankey Diagram

After my posts on visualizing Rotterdam port’s imports/exports and on Internet traffic maps, I have started to experiment with showing the export quantities and destinations for a certain trade good.

I wanted to do a Saudi Arabia or Iraq oil export Sankey map, but couldn’t find good data. I finally came across this summary on Libyan oil exports, and converted the data from the pie chart 'Libyan Oil Exports, by Destination, 2006' to a Sankey-style export flow diagram.

Export data for oil from Libya, originally shown as a pie chart, has been converted to a Sankey diagram. It gives a good idea of where most exports go. Created with e!Sankey, using a Wikicommons world map as the backdrop.
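For anyone wanting to reproduce this kind of conversion, the essential step is just restructuring pie-chart shares into Sankey links of the form (source, target, value). A minimal sketch; the destination shares below are illustrative placeholders, not the actual 2006 figures, and only the 1,525 thousand bbl/day total comes from the post.

```python
# Restructure pie-chart shares into Sankey links (source, target, value).
TOTAL_KBBL_PER_DAY = 1525  # net exports figure quoted in the post

# Destination shares: assumed, illustrative placeholders.
shares = {"Italy": 0.38, "Germany": 0.14, "Spain": 0.09, "Other": 0.39}

links = [("Libya", dest, round(TOTAL_KBBL_PER_DAY * share))
         for dest, share in shares.items()]
for src, dest, value in links:
    print(f"{src} -> {dest}: {value} thousand bbl/day")
```

A list of weighted links like this is exactly what Sankey tools (e!Sankey, plotly, matplotlib's sankey module) consume, with arrow width proportional to value.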

It was new to me that “Libya has the largest proven oil reserves in Africa” with 41.5 billion barrels, and estimated net exports of 1.525 million barrels per day in 2006.

The underlying map is a crop from a World map found on Wikicommons. I think it could be a little more transparent though…

U.S. Oil Import Sankey Movie

The renowned Rocky Mountain Institute (RMI), founded in 1982 by Lovins and Lovins, has an interactive oil imports map on its MOVE project webpage.

You can see the oil imports to the United States from January 1973 to August 2008 on a map that depicts the flow quantities as Sankey arrows linking the country of origin and the U.S. If you switch to the unit “Dollar”, you can see the value of the oil imported depicted as Sankey arrows.

One can play the whole 35-year period as a movie, or use the slider on the time line to see individual months. The data used is from publicly accessible EIA/DOE statistics.

A screenshot from RMI's interactive U.S. Oil Import Map showing the quantities of crude oil imports from different countries as Sankey arrows.

The United States is still 60% dependent on imported oil. RMI’s MOVE project seeks possibilities to reduce foreign crude oil dependency. The goal is to “get completely off oil by 2050, led by business for profit.”

Go to the RMI movie page and try it yourself. When I did the Libya Oil Export map last year I wasn’t aware of this Sankey movie, which is of course much nicer.

Technology for the Recovery of Gold Fines & Ultrafines

Brazilian Experience
From the distribution of gold across the size (granulometric) fractions of the plant feed, the following is verified:

- the calculated average gold grade of the plant feed is nearly 3.3 g/t (considering an in-situ ore density of 1.6 t/m3);

- 69.1% of the ore mass is finer than 200 mesh (74 microns), and these fines carry 46.3% of the gold contained in the ore;

- within the -200 mesh fraction, only 7.4% of the mass and 9.3% of the total gold lie between 74 microns and 10 microns, which shows that 61.7% of the mass is finer than 10 microns and carries 37% of the gold contained in the ore;

- the coarse fractions show elevated gold grades distributed irregularly across the size range, with higher values in the +6#, -28+35#, and -65+200# fractions.
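The metal balance behind figures like these can be sketched as follows: each size fraction's share of the contained gold is its mass fraction times its grade, normalised by the head grade. The numbers below are illustrative stand-ins chosen to give a similar head grade, not the actual plant data.

```python
# Metal-balance sketch: gold distribution across size fractions.
fractions = [
    # (name, mass fraction of feed, gold grade in g/t) - illustrative values
    ("+200 mesh",          0.309, 5.7),
    ("-200 mesh +10 um",   0.074, 4.2),
    ("-10 um",             0.617, 2.0),
]

# Head grade is the mass-weighted average of the fraction grades.
head_grade = sum(m * g for _, m, g in fractions)

for name, m, g in fractions:
    share = m * g / head_grade   # fraction of the contained gold
    print(f"{name}: {share:.1%} of the contained gold")
print(f"head grade: {head_grade:.2f} g/t")  # ~3.31 g/t, close to the ~3.3 quoted
```

This is the standard check that fraction-by-fraction gold distributions and the head grade are mutually consistent.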

The results of the dense-liquid separation show elevated weight percentages of heavy minerals in the coarse fractions, falling off sharply in the finer fractions. In parallel, gold grades in these heavies are on the order of tens of grams per tonne in the coarse fractions, rising to the order of hundreds of grams per tonne in fractions finer than 28 mesh.

Mineralogically the ore is composed basically of quartz, together with clay-limonite aggregates of varied porosity, iron oxides with varying degrees of hydration, and clay minerals. As minor constituents, iron sulphates were observed.

The float product from dense-media separation is composed essentially of quartz and clay aggregates, while the sink product consists of clay-limonite aggregates and iron oxides.

Free gold was observed only at sizes finer than 28 mesh (the liberation mesh). In the coarse fractions the gold occurs locked in clay-limonite aggregates; from 28 mesh down it shows liberation on the order of 95%, the remaining 5% being associated with the aggregates that make up the sink product.

In the liberated fractions the gold occurs as particles with dendritic and equidimensional forms; laminated particles are extremely rare at the finer sizes. Some grains are superficially impregnated by clay-limonite covering up to 30% of their surface in the -28+35 fraction, a proportion that diminishes in the finer fractions and is practically non-existent below 65 mesh.

To investigate the form of the gold occurrences, bench-scale separation tests were performed, including dense-liquid and electromagnetic separations. It was verified that the light fraction consists essentially of microcrystalline quartz-clay aggregates with some limonite; the associated gold occurs in mixed grains where it represents less than 4% by weight.

The intermediate products contain the same aggregates but with larger amounts of associated iron oxide; the gold represents less than 8% by weight in mixed grains. The gold in the liberated products is practically all free, occurring in the form of small nuggets.

In the coarsest fraction almost 50% of the contained gold is liberated; at finer meshes that value reaches a maximum of 85%. The characterization data for the desliming rejects show that at least 25% of the 37% of gold contained in the deslimed material is liberated.

The main general conclusions of these characterization studies of the feed and rejects of the Salamangone plant were the following:

a. the gold present in the rejects is mainly fine and ultrafine, and not recoverable by gravity processes;

b. almost 65% of the gold contained in the desliming rejects is ultrafine, below 2 microns. The gold retained in the coarser fractions is found partly locked in aggregates of quartz and/or clay minerals and/or iron oxides, but is mainly free, in an approximate partition of 10% and 25% of the total content.

Panning for Gold

In the last decade gold mining has gone through substantial changes all over the world, taking effect in the countries with big gold investment programs and thereby elevating production substantially. As a consequence, this also brought about research and investigation into modern and appropriate technologies, with tendencies toward cost reduction, high productivity, and increased production.

A great part of this scientific and technological experience from all around the world can be seen on this site. In recent years different countries have been able to take on modern techniques for the different stages of production and industrialization of the yellow metal. Some of the technological innovations that have taken place are:

The carbon-in-pulp process, which achieves the recovery of gold, complemented by precipitation with zinc and electrodeposition.

Cyanidation, a simple and economical procedure that allows the working of deposits with low gold contents or the re-treatment of old tailings. Besides these, more and more emphasis is nowadays being placed on the oldest technique known: gravimetry. All this involves the deployment of gravimetric tools that allow high recovery of gold.

The particularity of processing gold minerals does not reside in the use of certain specific techniques, but in the combination of technologies based on an exhaustive mineral study, following the old belief that no two mines are the same; therefore, one specific technology is not applicable to two mines.

The idea is to put in the hands of those interested and immersed in gold mining some advisory information that will support your work, with the goal of achieving an optimal and productive operation.

The following has been separated into parts in order to give you an integral vision of this activity. The first part takes on the aspects that need to be taken into account to understand and carry out exploration work. In the second part we focus on the different types of deposits, and also on combining mineral processing techniques in an appropriate way, such as applying the right extraction, concentration, and gold recovery technology according to the type of deposit and its mineralogical association, in order to obtain good results.

Without a doubt, the idea of this article is to provide the best and most complete information about this important theme. The idea is to cover a good number of aspects; in general, we hope that it is useful.

Gold Prospecting & Panning | Gold Mining & Metallurgy

We are going to show you the basic theory and methodology you need to find where gold is present, how to identify it, and what alternatives there are for extracting gold via panning or other methods. In short, this site will show you everything you need to know about gold prospecting. Although we'll explain to a great extent the "how to" of searching for mineral deposits within ore veins, this guide considers free gold (gold placers of various types) as the main source of this precious metal. Also learn how to invest in gold via our other detailed guide.

Mining extraction and metallurgy of gold have been pursued with great interest since gold developed intrinsic value as a result of its economic and physical properties, its decorative appeal, and its scarcity. For a long time gold has been the most attractive metal in the world, and to this day it remains among the most sought after; many developments and new ideas have appeared as a consequence of its huge value as an economic metal.

The gold mining industry has grown considerably, and it appeared to the writer that the present would be a propitious time to bring out some guides to understanding gold prospecting and gold mining. The goal of the site has been to make "Prospecting for Gold" a compendium, in especially concrete form, of useful information about the processes of winning gold from the soil and the after-treatment of gold and gold ores, including some original suggestions. Practical information, original and selected, is given for mining company directors, mine managers, mill operators, prospectors, and those seeking mining jobs. In each part will be found a large number of useful hints on subjects directly and indirectly connected with gold mining.

All the information should be very useful, and much of it is original; each reader will be able to appreciate the difficult task of processing gold ores found in the veins that bear them. You'll learn the art of extracting valuable metals from gravel placers.

Here, you can learn about metal detecting as well as gold panning. You get to learn why gold placers form where they form. You'll learn how to look for and find gold placers (that's prospecting!) and how to analyze them.

Those unfamiliar with prospecting and mining will find a great Glossary of the Mining Trade's Terms. The characteristics of an ore deposit and its mineral assemblages (mineralization) determine the mining methods, the extraction process (recovery methods and equipment needed), and the performance of all chemical processes involved in gold extraction. Thus, a good knowledge of an ore is required to develop an efficient gold extraction process.

The gold mineralogy can offer the following variations:

  • Gold occurrence, showings.
  • Gold particle size and distribution.
  • Type of gangue.
  • Mineralogical association.
  • Changes in mineralization.
  • Changes over time.

The Quest For Human Level Artificial Intelligence: AGI-09 and Ben Goertzel

The second conference on Artificial General Intelligence (AGI) is set to convene in Arlington, Virginia on March 6-9, 2009. The conference stands uniquely as one of the few (only?) conferences that focus solely on the ambitious goal of creating true human level artificial intelligence.

The conference, dubbed AGI-09, comes on the heels of last year’s successful conference in 2008, AGI-08. The schedule for the conference is intense, offering attendees a three day lineup of keynote speeches, workshops, moderated discussions, and presentations.

Singularity Hub has negotiated a 10% discount on admission to the conference for the first five readers that sign up. The discount only applies to regular price admission (it cannot be added on top of an AAAI discount or student discount). To take advantage of this discount, simply be one of the first 5 people to mention Singularity Hub when you register.

The Hub’s Keith Kleiner recently had a chance to speak with AGI-09 creator and conference chair Ben Goertzel to ask him about the conference and the field of AI in general. Below is a short bio on Goertzel, followed by a summary of Kleiner’s discussion with Goertzel:

Goertzel is the founder and CEO of Novamente, a company that supplies software products and services to power intelligent virtual agents for virtual worlds, computer games, and simulations. Goertzel has been showing up everywhere, presenting at the Singularity Summit in 2007 and 2008 and also presenting this tech talk at Google. Goertzel is the driving force behind a collaborative website for the open source development of human level artificial intelligence, reported on earlier at Singularity Hub.

I asked Goertzel if he was aware of any conference other than AGI-09 that was focused solely on the creation of human level artificial intelligence. I was surprised to hear that although there are small workshops within other conferences, AGI-09 is the only full-scale conference that focuses on the topic. The AAAI and IEEE conferences dedicate small portions of their schedules to human level artificial intelligence, but mostly focus on what is called narrow AI: systems aimed at solving a single particular problem, such as credit card fraud detection. Conferences like AGI-09 are finally getting researchers to again attempt to tackle the problem of creating human level artificial intelligence (known as strong AI) after more than a decade of practically ignoring the challenge because it was seen as too hard.

Goertzel pointed out that narrow AI is becoming significantly more advanced and is showing up everywhere in our lives. A few examples: credit card fraud detection, chess-playing, scientific data analysis, scheduling military operations, and controlling non-player characters in games. With the relentless advance of Moore’s law and accelerating technology, interest and confidence in our ability to create human level artificial intelligence is slowly on the rise. Goertzel is hopeful that in the next few years someone (perhaps even one of his own efforts) will achieve a major success that will alter the landscape and cause others to join the cause in greater numbers.

A major focus for Goertzel at the moment is the creation of a virtual animal, such as a dog or a parrot, whose language skills improve over time as it interacts with humans. Were such a virtual animal to achieve a respectable level of language competence, it could serve as a lightning rod, persuading others to take the problem of developing human level AI more seriously. I suggested Goertzel make a virtual person instead of a virtual animal, but he responded that his focus on virtual animals is deliberate: people change their expectations when approaching a virtual animal versus a virtual human. Even if the software is very good, people will quickly disregard its achievements when it emulates a human but falls short of exact human ability. When the same software powers a virtual animal, however, they are more willing to credit and appreciate what it accomplishes.

When asked what he would really like to see from the AI community in the near term, Goertzel stated he would like to see us develop and converge on a detailed roadmap for the creation of human level artificial intelligence. Such a roadmap could follow in the footsteps of the Metaverse Roadmap, a successful initiative that the virtual world community is using for the creation of advanced, immersive online 3D worlds within the next ten years. Sounds like a good idea to me! If Goertzel or someone reading this blog post doesn’t beat us to it, perhaps Singularity Hub will lead the beginning of such a roadmap.

Vehicle Routing Software Tools Logistics Software

Vehicle Routing Software Tools

These vehicle routing software tools and logistics software provide all the software functions you will ever need to add vehicle routing and vehicle scheduling capability to your existing, in-house software applications.


You do not have to throw away your current applications in order to introduce vehicle routing and scheduling to your existing systems. See an example of how easily it can be done.


You can build your system incrementally at your own pace, matching your staff’s learning curve. There is no need to introduce drastic changes to your normal operating procedures, with the attendant loss of productivity and new training costs.


Logistics Software

Our vehicle routing software tools are state-of-the-art routing and logistics components that can be integrated into most applications where a fast response is imperative when calculating the optimum path between two points of a routing network. Typical applications include vehicle routing software, vehicle scheduling software, logistics software, fleet management software, passenger information in transit networks, and others. Our vehicle routing software tools have been tested extensively in the field, in small and large road networks, and are totally independent of any specific database or geographic information system.
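The core operation described here, finding the optimum path between two points of a weighted road network, is classically handled by Dijkstra’s algorithm. The sketch below is illustrative only (the function and graph names are hypothetical, not this product’s API), assuming the network is held as an adjacency list of directed arcs with non-negative travel times:

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm over a weighted digraph.
    graph: {node: [(neighbor, travel_time), ...]} -- a one-way street
    is simply an arc with no reverse counterpart."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]          # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return None, float("inf")  # target unreachable
    path = [target]                # walk predecessors back to source
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]
```

For example, with arcs A→B (1), B→C (1), and A→C (4), the query `shortest_path(graph, 'A', 'C')` returns the two-hop route at cost 2 rather than the direct arc at cost 4.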

You, the developer or experienced end user of vehicle routing software or logistics software, can easily and flexibly integrate these vehicle routing software tools into your current system with a minimum of development time and cost. Our vehicle routing software tools match or exceed the specifications and functionality of the major players in the field of logistics software.

Our vehicle routing software tools, when used in conjunction with our vehicle scheduling software, take into account network link travel speeds, left/right turn restrictions, one-way streets, and network delays when building a vehicle route and schedule.
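Turn restrictions cannot be expressed in a plain node-to-node graph, because whether a movement is legal depends on which link you arrived on. A common technique (sketched below under assumed data shapes; these are not this product’s actual structures) is an edge-expanded graph whose search states are directed links, so a banned left turn is simply a missing transition, and per-link speeds and turn delays become transition costs. Any standard shortest-path search can then run over the expanded graph:

```python
def link_time(length_m, speed_kmh):
    """Traversal time in seconds for a directed link."""
    return length_m / (speed_kmh / 3.6)

def edge_expanded_graph(links, turn_delays, banned_turns):
    """links: {(a, b): (length_m, speed_kmh)} -- a one-way street simply
    lacks the (b, a) entry.  turn_delays / banned_turns are keyed by the
    movement (a, b, c), i.e. arriving on link a->b and leaving on b->c.
    Returns a graph whose *nodes* are directed links."""
    graph = {link: [] for link in links}
    for (a, b) in links:
        for (b2, c) in links:
            if b2 != b or c == a:       # links must chain; forbid U-turns
                continue
            if (a, b, c) in banned_turns:
                continue                # e.g. a prohibited left turn
            length, speed = links[(b, c)]
            cost = link_time(length, speed) + turn_delays.get((a, b, c), 0.0)
            graph[(a, b)].append(((b, c), cost))
    return graph
```

One detail of this formulation: the cost of the very first link is not included in any transition, so a real query adds `link_time` for the starting link up front.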

Our routing software tools, when used in conjunction with passenger information systems for transit networks, also use fixed-route bus schedules, trolley and subway schedules, and passenger transfer point delays in order to calculate the best passenger route.
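Schedule-based passenger routing is a different problem from road routing: the question is the earliest arrival time given fixed departure times and transfer delays. A minimal illustration of the idea, in the style of a connection-scan pass (again a generic sketch, with hypothetical names, and the simplification that the transfer delay is also charged at the first boarding):

```python
def earliest_arrival(connections, origin, start_time, transfer_delay=0):
    """connections: list of (dep_stop, arr_stop, dep_time, arr_time)
    sorted by dep_time.  Returns the earliest reachable time per stop.
    Simplification: transfer_delay is applied before every boarding,
    including the first one at the origin."""
    best = {origin: start_time}
    for dep_stop, arr_stop, dep, arr in connections:
        reach = best.get(dep_stop)
        # Board only if we reach the stop early enough to make the transfer.
        if reach is not None and reach + transfer_delay <= dep:
            if arr < best.get(arr_stop, float("inf")):
                best[arr_stop] = arr
    return best
```

With a five-minute transfer delay, a connection departing two minutes after the passenger’s arrival is correctly skipped in favor of a later one, which is exactly the “passenger transfer point delay” behaviour described above.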

DHS invests in mind-reading anti-terrorist technology -- and staff phrenologists to interpret the results

Hurrah -- the DHS is buying mind-reading machines that can tell you're a terrorist by examining the terrorist-thought-center of your brain. People with failing brains will be sent for corrective surgery.

MALINTENT, the brainchild of the cutting-edge Human Factors division in Homeland Security's directorate for Science and Technology, searches your body for non-verbal cues that predict whether you mean harm to your fellow passengers.

It has a series of sensors and imagers that read your body temperature, heart rate and respiration for unconscious tells invisible to the naked eye — signals terrorists and criminals may display in advance of an attack.

U.S. Developing Mind Reading Technology

The use of scanners to read brain signals allowed the researchers to correctly determine which of two images their guinea pigs were looking at 80 per cent of the time. The test is one in a series in which scientists have read minds using Magnetic Resonance Imaging (MRI) scanners, which are normally used in hospitals to detect the flow of blood around the brain using a magnetic field and radio waves.

Dr Stephanie Harrison, who led the study at Vanderbilt University in Nashville, asked six volunteers to look at different images on a screen – one of a circle with almost horizontal lines across it and one of a circle with almost vertical lines across it.

As they were shown the images, monitoring showed that different sides of their brains had lit up.

They were then asked to remember one particular circle and, from looking at the pattern of brain activity, the researchers were able to tell with considerable accuracy which one they were thinking of.

Writing in the journal Nature, Dr Harrison said: “Decoding accuracy greatly exceeded chance-level performance of 50 per cent and proved highly reliable in the six participants.”

While the study does not unlock the secrets of mind-reading or thought prediction, it does allow scientists to determine which parts of the brain are involved in short-term visual memory.

Previously, scientists in California asked volunteers to look at 1,750 images then used MRI scans to correctly judge, in nine out of ten cases, which one they were thinking of.

Lead researcher Dr Jack Gallant warned after the results were published last year: “It is possible decoding brain activity could have serious ethical and privacy implications. We believe that no one should be subjected to any form of brain-reading involuntarily, covertly, or without informed consent.”