Thursday, 31 August 2017
Wednesday, 30 August 2017
LM3900
Description
These devices consist of four independent, high-gain frequency-compensated Norton operational amplifiers that were designed specifically to operate from a single supply over a wide range of voltages. Operation from split supplies is also possible. The low supply current drain is essentially independent of the magnitude of the supply voltage. These devices provide wide bandwidth and large output voltage swing.
Features
- Wide Range of Supply Voltages, Single or Dual Supplies
- Wide Bandwidth
- Large Output Voltage Swing
- Output Short-Circuit Protection
- Internal Frequency Compensation
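A Norton (current-differencing) amplifier like this is biased with currents rather than voltages. In the common single-supply inverting AC amplifier configuration, the gain is set by the ratio of feedback to input resistance, and the usual rule of thumb is to make the bias resistor to V+ equal to twice the feedback resistor, which centres the quiescent output near half the supply. A quick sketch of those rules (component values below are arbitrary examples, not from the datasheet):

```python
VCC = 12.0         # single supply, volts (example value)
R_F = 1_000_000.0  # feedback resistor, ohms (example value)
R_IN = 100_000.0   # input resistor, ohms (example value)

# Inverting AC gain of the Norton amplifier stage
gain = -R_F / R_IN

# Rule of thumb: a bias resistor of 2*R_F from V+ centres the output,
# since the mirrored bias current ~VCC/(2*R_F) drops ~VCC/2 across R_F.
R_BIAS = 2 * R_F
v_out_dc = VCC * R_F / R_BIAS

print(f"AC gain ~ {gain:.0f}")                 # -10
print(f"quiescent output ~ {v_out_dc:.1f} V")  # 6.0 V, i.e. VCC/2
```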
Tuesday, 29 August 2017
quick update on the content
Hi folks
I'm still buried under a massive workload, but I'm hoping to have it cleared up by the end of September, so the content will be of better quality soon. Please bear with me for just a little while longer. I'm hoping to have a review of a clamp meter that I got from Maplin up in the next few days, so keep an eye out for that.
thanks
dobby
Monday, 28 August 2017
Crystal oscillator
A crystal oscillator is an electronic oscillator circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a precise frequency. This frequency is commonly used to keep track of time, as in quartz wristwatches, to provide a stable clock signal for digital integrated circuits, and to stabilize frequencies for radio transmitters and receivers. The most common type of piezoelectric resonator used is the quartz crystal, so oscillator circuits incorporating them became known as crystal oscillators, but other piezoelectric materials including polycrystalline ceramics are used in similar circuits.
A crystal oscillator, particularly one made of quartz crystal, works by being distorted by an electric field when voltage is applied to an electrode near or on the crystal. This property is known as electrostriction or inverse piezoelectricity. When the field is removed, the quartz - which oscillates in a precise frequency - generates an electric field as it returns to its previous shape, and this can generate a voltage. The result is that a quartz crystal behaves like an RLC circuit.
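The RLC equivalence above can be made concrete. A quartz crystal is commonly modelled as a motional inductance, capacitance, and resistance in series, shunted by a package capacitance, which gives it two closely spaced resonances. A quick sketch (the component values below are illustrative, not taken from any particular crystal's datasheet):

```python
import math

# Illustrative motional parameters for a notional 10 MHz crystal
L1 = 12e-3     # motional inductance, henries
C1 = 21.1e-15  # motional capacitance, farads
C0 = 4e-12     # shunt (package) capacitance, farads

# Series resonance: the motional L1-C1 arm alone
f_s = 1 / (2 * math.pi * math.sqrt(L1 * C1))

# Parallel (anti-)resonance: C1 in series with the shunt C0
f_p = 1 / (2 * math.pi * math.sqrt(L1 * C1 * C0 / (C1 + C0)))

print(f"series resonance   ~ {f_s/1e6:.4f} MHz")
print(f"parallel resonance ~ {f_p/1e6:.4f} MHz")
```

The parallel resonance always sits slightly above the series one; oscillator circuits are designed to operate the crystal at (or between) these two points.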
Quartz crystals are manufactured for frequencies from a few tens of kilohertz to hundreds of megahertz. More than two billion crystals are manufactured annually. Most are used for consumer devices such as wristwatches, clocks, radios, computers, and cellphones. Quartz crystals are also found inside test and measurement equipment, such as counters, signal generators, and oscilloscopes.
History
Piezoelectricity was discovered by Jacques and Pierre Curie in 1880. Paul Langevin first investigated quartz resonators for use in sonar during World War I. The first crystal-controlled oscillator, using a crystal of Rochelle salt, was built in 1917 and patented in 1918 by Alexander M. Nicholson at Bell Telephone Laboratories, although his priority was disputed by Walter Guyton Cady. Cady built the first quartz crystal oscillator in 1921. Other early innovators in quartz crystal oscillators include G. W. Pierce and Louis Essen.

Quartz crystal oscillators were developed for high-stability frequency references during the 1920s and 1930s. Prior to crystals, radio stations controlled their frequency with tuned circuits, which could easily drift off frequency by 3–4 kHz. Since broadcast stations were assigned frequencies only 10 kHz apart, interference between adjacent stations due to frequency drift was a common problem. In 1925 Westinghouse installed a crystal oscillator in its flagship station KDKA, and by 1926 quartz crystals were used to control the frequency of many broadcasting stations and were popular with amateur radio operators. In 1928, Warren Marrison of Bell Telephone Laboratories developed the first quartz-crystal clock. With accuracies of up to 1 second in 30 years (30 ms/y, or 10⁻⁷), quartz clocks replaced precision pendulum clocks as the world's most accurate timekeepers until atomic clocks were developed in the 1950s. Using the early work at Bell Labs, AT&T eventually established their Frequency Control Products division, later spun off and known today as Vectron International.
A number of firms started producing quartz crystals for electronic use during this time. Using what are now considered primitive methods, about 100,000 crystal units were produced in the United States during 1939. Through World War II crystals were made from natural quartz crystal, virtually all from Brazil. Shortages of crystals during the war caused by the demand for accurate frequency control of military and naval radios and radars spurred postwar research into culturing synthetic quartz, and by 1950 a hydrothermal process for growing quartz crystals on a commercial scale was developed at Bell Laboratories. By the 1970s virtually all crystals used in electronics were synthetic.
In 1968, Juergen Staudte invented a photolithographic process for manufacturing quartz crystal oscillators while working at North American Aviation (now Rockwell) that allowed them to be made small enough for portable products like watches.
Although crystal oscillators still most commonly use quartz crystals, devices using other materials are becoming more common, such as ceramic resonators.
Sunday, 27 August 2017
Inductors
An inductor, also called a coil or reactor, is a passive two-terminal electrical component that stores electrical energy in a magnetic field when electric current flows through it. An inductor typically consists of an electric conductor, such as a wire, that is wound into a coil around a core.
When the current flowing through an inductor changes, the time-varying magnetic field induces a voltage in the conductor, described by Faraday's law of induction. According to Lenz's law, the direction of induced electromotive force (e.m.f.) opposes the change in current that created it. As a result, inductors oppose any changes in current through them.
An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units (SI), the unit of inductance is the henry (H). Inductors have values that typically range from 1 µH (10−6H) to 1 H. Many inductors have a magnetic core made of iron or ferrite inside the coil, which serves to increase the magnetic field and thus the inductance. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current (AC) electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; inductors designed for this purpose are called chokes. They are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers.
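The choke behaviour described above follows directly from inductive reactance, X_L = 2πfL: at DC (f = 0) the reactance is zero, so direct current passes freely, while the reactance grows without bound as frequency rises. A quick illustration (the inductor value is an arbitrary example):

```python
import math

L = 100e-6  # a 100 µH inductor (arbitrary example value)

def reactance(f_hz, inductance):
    """Inductive reactance X_L = 2*pi*f*L, in ohms."""
    return 2 * math.pi * f_hz * inductance

# DC, mains, audio, and RF frequencies
for f in (0, 50, 1e3, 1e6):
    print(f"{f:>10.0f} Hz -> {reactance(f, L):10.3f} ohm")
```

At 0 Hz the inductor is a short circuit; at 1 MHz the same part presents over 600 Ω, which is exactly why chokes block AC while passing DC.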
Description
An electric current flowing through a conductor generates a magnetic field surrounding it. Any change of current, and therefore in the magnetic flux Φ through the cross-section of the inductor, creates an opposing electromotive force in the conductor. The inductance (L) characterizes this behavior of an inductor and is defined in terms of that opposing electromotive force or its generated magnetic flux (Φ) and the corresponding electric current (i):

L = Φ / i   (1)

Constitutive equation
Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. By Faraday's law of induction, the voltage induced by any change in magnetic flux through the circuit is

v(t) = dΦ/dt = L di/dt   (2)
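Equation (2) can be checked numerically: if the current through an ideal inductor ramps linearly, the induced voltage is simply the constant L·di/dt. A minimal sketch (the inductance and ramp rate are made-up example values):

```python
L = 10e-3   # 10 mH inductor (example value)
di = 2.0    # current change, amperes
dt = 1e-3   # over one millisecond

# v = L * di/dt for an ideal inductor, per equation (2)
v = L * di / dt
print(f"induced voltage = {v:.1f} V")  # 0.010 H * 2000 A/s = 20.0 V
```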
The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field. Its current–voltage relation is obtained by exchanging current and voltage in the inductor equations and replacing L with the capacitance C.
Applications
Inductors are used extensively in analog circuits and signal processing. Applications range from large inductors in power supplies, which in conjunction with filter capacitors remove residual hum (mains hum) and other fluctuations from the direct-current output, to the small inductance of a ferrite bead or torus installed around a cable to prevent radio-frequency interference from being transmitted down the wire. Inductors are used as the energy storage device in many switched-mode power supplies to produce DC current: the inductor supplies energy to the circuit to keep current flowing during the "off" switching periods.

An inductor connected to a capacitor forms a tuned circuit, which acts as a resonator for oscillating current. Tuned circuits are widely used in radio frequency equipment such as radio transmitters and receivers, as narrow bandpass filters to select a single frequency from a composite signal, and in electronic oscillators to generate sinusoidal signals.
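The tuned-circuit resonant frequency is f = 1/(2π√(LC)). As a sketch, the example values below (chosen here for illustration, roughly typical of an AM radio front end) put the resonance near the middle of the AM broadcast band:

```python
import math

L = 250e-6   # 250 µH coil (example value)
C = 100e-12  # 100 pF variable capacitor (example value)

# Resonant frequency of the LC tuned circuit
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency ~ {f0/1e3:.0f} kHz")
```

Varying C (as the tuning capacitor in a receiver does) sweeps f0 across the band, which is how a radio selects one station from the composite signal at its antenna.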
Two (or more) inductors in proximity that have coupled magnetic flux (mutual inductance) form a transformer, which is a fundamental component of every electric utility power grid. The efficiency of a transformer may decrease as the frequency increases due to eddy currents in the core material and skin effect on the windings. The size of the core can be decreased at higher frequencies. For this reason, aircraft use 400 hertz alternating current rather than the usual 50 or 60 hertz, allowing a great saving in weight from the use of smaller transformers.
Inductors are also employed in electrical transmission systems, where they are used to limit switching currents and fault currents. In this field, they are more commonly referred to as reactors.
Because inductors have complicated side effects (detailed below) which cause them to depart from ideal behavior, because they can radiate electromagnetic interference (EMI), and most of all because of their bulk which prevents them from being integrated on semiconductor chips, the use of inductors is declining in modern electronic devices, particularly compact portable devices. Real inductors are increasingly being replaced by active circuits such as the gyrator which can synthesize inductance using capacitors.
Lenz's law
The polarity (direction) of the induced voltage is given by Lenz's law, discovered by Heinrich Lenz in 1834, which states that it will be such as to oppose the change in current. For example, if the current through an inductor is increasing, the induced voltage will be positive at the terminal through which the current enters and negative at the terminal through which it leaves, tending to oppose the additional current. The energy from the external circuit necessary to overcome this potential "hill" is being stored in the magnetic field of the inductor; the inductor is said to be "charging" or "energizing". If the current is decreasing, the induced voltage will be negative at the terminal through which the current enters and positive at the terminal through which it leaves, tending to maintain the current. Energy from the magnetic field is being returned to the circuit; the inductor is said to be "discharging".

Ideal and real inductors
In circuit theory, inductors are idealized as obeying the mathematical relation (2) above precisely. An "ideal inductor" has inductance, but no resistance or capacitance, and does not dissipate or radiate energy. However, real inductors have side effects which cause their behavior to depart from this simple model. They have resistance (due to the resistance of the wire and energy losses in core material) and parasitic capacitance (due to the electric field between the turns of wire, which are at slightly different potentials). At high frequencies the capacitance begins to affect the inductor's behavior; at some frequency, real inductors behave as resonant circuits, becoming self-resonant. Above the resonant frequency the capacitive reactance becomes the dominant part of the impedance. At higher frequencies, resistive losses in the windings increase due to the skin effect and proximity effect.

Inductors with ferromagnetic cores have additional energy losses due to hysteresis and eddy currents in the core, which increase with frequency. At high currents, iron-core inductors also show gradual departure from ideal behavior due to nonlinearity caused by magnetic saturation of the core. An inductor may radiate electromagnetic energy into surrounding space and circuits, and may absorb electromagnetic emissions from other circuits, causing electromagnetic interference (EMI). For real-world inductor applications, these parasitic parameters may be as important as the inductance.
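The self-resonance described above can be sketched with the simplest real-inductor model: the winding (inductance in series with its resistance) in parallel with the parasitic capacitance. Below the self-resonant frequency the part looks inductive, at resonance its impedance peaks, and above it the parasitic capacitance dominates. All component values here are made-up examples:

```python
import math

L = 10e-6      # nominal inductance, 10 µH (example values throughout)
R = 1.0        # winding resistance, ohms
C_par = 2e-12  # parasitic inter-winding capacitance, farads

def impedance(f):
    """Impedance of the simple real-inductor model: (R + jwL) || C_par."""
    w = 2 * math.pi * f
    z_winding = complex(R, w * L)            # lossy winding
    z_parasitic = complex(0, -1 / (w * C_par))  # parasitic capacitance
    return (z_winding * z_parasitic) / (z_winding + z_parasitic)

# Self-resonant frequency of the parallel L-C_par combination
f_srf = 1 / (2 * math.pi * math.sqrt(L * C_par))
for f in (f_srf / 10, f_srf, f_srf * 10):
    print(f"{f/1e6:8.1f} MHz -> |Z| = {abs(impedance(f)):12.1f} ohm")
```

The printout shows the impedance magnitude rising toward a sharp peak at f_srf and falling again above it, which is why a "10 µH" part stops behaving like an inductor at high enough frequency.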
Saturday, 26 August 2017
Field-programmable gate array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). (Circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare.)
FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.
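A logic block built from lookup tables works by storing the function's entire truth table: a k-input LUT is just 2^k configuration bits, indexed by the input values. A minimal software model of a 3-input LUT (the majority function chosen here is purely an illustrative configuration):

```python
# A k-input LUT is 2**k stored bits, indexed by the inputs.
def make_lut(truth_table):
    """truth_table: list of 2**k output bits, indexed by the inputs
    packed as a binary number (first input is the MSB)."""
    def lut(*inputs):
        index = 0
        for bit in inputs:
            index = (index << 1) | (bit & 1)
        return truth_table[index]
    return lut

# Configure one LUT as a 3-input majority function: output is 1
# when two or more inputs are 1.
majority = make_lut([0, 0, 0, 1, 0, 1, 1, 1])

print(majority(1, 0, 1))  # -> 1
print(majority(1, 0, 0))  # -> 0
```

Configuring the FPGA amounts to loading these truth-table bits (plus the interconnect settings) into the device, which is why any combinational function of the LUT's inputs can be realized.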
Technical design
Contemporary field-programmable gate arrays (FPGAs) have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/Os and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost) offer advantages for many applications.

Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management and for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed-signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
History
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). However, programmable logic was hard-wired between logic gates.

In the late 1980s, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.
Some of the industry's foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in patents awarded to David W. Page and LuVerne R. Peterson in 1985.
Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup tables (LUTs). More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention.
Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market. By 2010, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.
The 1990s were an explosive period of time for FPGAs, both in sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.
Friday, 25 August 2017
circuit prototyping
Circuit Prototyping Boards
There are a few ways to prototype your circuit designs. We’ll cover solderless breadboards, perfboard, and manufactured PCBs. Each one has its own set of pros and cons.

Solderless breadboards:
Pros: one-time expense, quick and easy to build and modify circuit designs.
Cons: contacts eventually weaken, not permanent, hard to do with surface mount (SMT) parts.
Perfboard:
Pros: not expensive, sort of permanent, easy to modify circuit design, SMT boards are available.
Cons: more time consuming, need to solder.
Manufactured PCB:
Pros: professional look and feel, through-hole and SMT available, multiple layers available.
Cons: manufacturing time usually takes at least a week, harder to modify circuit design, most expensive.
As a side note on making PCBs, you can also buy kits and etch your own boards. If you go this route, you’ll have to work with harsh chemicals. This can be messy, smelly, and dangerous. The good thing is you get your boards quickly and can always etch another if you make a mistake.
Draw a Part Layout Before Starting
Soldering components to a perfboard or making a PCB only to find that all the parts don’t have enough room to fit properly is a real bummer. To avoid this, draw a part layout before you get started to be sure everything fits. If you use a PCB layout program, you can create layouts, print them, and use them as a template for perfboard prototyping.
Different Colours for Different Wires
Use different colours for making group connections like a data bus or a power bus. For example, make the data bus wires all yellow and the power wires all blue. This will help you keep track of your connections. And if you need to go back to the circuit design later to troubleshoot or modify it, this will make your life a lot easier.
Use IC Sockets
Use sockets for all your integrated circuits. Start out with all sockets unpopulated so you can test for proper voltage on all points of the circuit. This can help prevent you from blowing out any parts if you’ve made a mistake, like swapping power and ground. Once everything is checked and good to go, pop the ICs into the sockets.
Perfboard Layout and Circuit Design
When using perfboard that has not been made for a specific enclosure (some are), spend time cutting it and fitting it to whatever enclosure you pick. If not, you run the risk of later finding a component mounted in such a way that it blocks the hole you needed to put there.

Thursday, 24 August 2017
EPROM
An EPROM (rarely EROM), or erasable programmable read-only memory, is a type of memory chip that retains its data when its power supply is switched off. Computer memory that can retrieve stored data after the power supply has been turned off and back on is called non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to a strong ultraviolet light source (such as a mercury-vapor lamp). EPROMs are easily recognizable by the transparent fused quartz window in the top of the package, through which the silicon chip is visible, and which permits exposure to ultraviolet light during erasing.
Development of the EPROM memory cell started with investigation of faulty integrated circuits where the gate connections of transistors had broken. Stored charge on these isolated gates changed their properties. The EPROM was invented by Dov Frohman of Intel in 1971, who was awarded US patent 3,660,819 in 1972.
Each storage location of an EPROM consists of a single field-effect transistor. Each field-effect transistor consists of a channel in the semiconductor body of the device. Source and drain contacts are made to regions at the end of the channel. An insulating layer of oxide is grown over the channel, then a conductive (silicon or aluminum) gate electrode is deposited, and a further thick layer of oxide is deposited over the gate electrode. The floating gate electrode has no connections to other parts of the integrated circuit and is completely insulated by the surrounding layers of oxide. A control gate electrode is deposited and further oxide covers it.
To retrieve data from the EPROM, the address represented by the values at the address pins of the EPROM is decoded and used to connect one word (usually an 8-bit byte) of storage to the output buffer amplifiers. Each bit of the word is a 1 or 0, depending on the storage transistor being switched on or off, conducting or non-conducting.
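The read path just described can be sketched as a toy model. The array contents and word width below are arbitrary example data, not anything from a real part:

```python
# Sketch of the EPROM read path: the decoded address selects one word,
# and each bit of that word reflects whether its storage transistor
# conducts (1) or not (0).

MEMORY = [0b10110001, 0b00000000, 0b11111111, 0b01010101]  # 4 example words

def read_word(address, width=8):
    """Return the bits of the addressed word, MSB first, as 1s and 0s."""
    word = MEMORY[address]
    return [(word >> bit) & 1 for bit in range(width - 1, -1, -1)]

assert read_word(0) == [1, 0, 1, 1, 0, 0, 0, 1]
assert read_word(2) == [1, 1, 1, 1, 1, 1, 1, 1]
```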
The switching state of the field-effect transistor is controlled by the voltage on the control gate of the transistor. Presence of a voltage on this gate creates a conductive channel in the transistor, switching it on. In effect, the stored charge on the floating gate allows the threshold voltage of the transistor to be programmed.
Storing data in the memory requires selecting a given address and applying a higher voltage to the transistors. This creates an avalanche discharge of electrons, which have enough energy to pass through the insulating oxide layer and accumulate on the gate electrode. When the high voltage is removed, the electrons are trapped on the electrode. Because of the high insulation value of the silicon oxide surrounding the gate, the stored charge cannot readily leak away and the data can be retained for decades.
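The program/read/erase behaviour described above can be modelled in a few lines. This is an illustrative sketch only; the threshold and read voltages are assumptions, not datasheet values:

```python
# Toy model of a single EPROM floating-gate cell.

class FloatingGateCell:
    """An erased cell has a low threshold and conducts at the read voltage
    (reads as 1). Programming traps charge on the floating gate, raising
    the threshold so the cell stays off when read (reads as 0)."""

    ERASED_VT = 1.0      # assumed threshold of an erased cell (V)
    PROGRAMMED_VT = 7.0  # assumed threshold after charge injection (V)
    READ_VOLTAGE = 5.0   # assumed control-gate voltage during a read (V)

    def __init__(self):
        self.vt = self.ERASED_VT

    def program(self):
        # High programming voltage injects electrons onto the floating
        # gate; the trapped charge raises the effective threshold.
        self.vt = self.PROGRAMMED_VT

    def uv_erase(self):
        # UV photocurrent bleeds the trapped charge back to the substrate.
        self.vt = self.ERASED_VT

    def read(self):
        # The cell conducts (1) only if the read voltage exceeds threshold.
        return 1 if self.READ_VOLTAGE > self.vt else 0

cell = FloatingGateCell()
assert cell.read() == 1   # erased cells read as 1
cell.program()
assert cell.read() == 0   # programmed cells read as 0
cell.uv_erase()
assert cell.read() == 1   # UV erase restores the cell
```

Note that because erasure resets every cell at once, programming can only be selective in one direction; real workflows erase the whole array and then program the bits that must change.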
The programming process is not electrically reversible. To erase the data stored in the array of transistors, ultraviolet light is directed onto the die. Photons of the UV light cause ionization within the silicon oxide, which allows the stored charge on the floating gate to dissipate. Since the whole memory array is exposed, all the memory is erased at the same time. The process takes several minutes for UV lamps of convenient sizes; sunlight would erase a chip in weeks, and indoor fluorescent lighting over several years.
Generally, EPROMs must be removed from equipment to be erased, since it is not usually practical to build in a UV lamp to erase parts in-circuit. The electrically erasable programmable read-only memory (EEPROM) was developed to provide an electrical erase function and has now mostly displaced ultraviolet-erased parts.
Application
For large volumes of parts (thousands of pieces or more), mask-programmed ROMs are the lowest-cost devices to produce. However, these require many weeks of lead time to make, since the artwork for an IC mask layer must be altered to store data on the ROMs. Initially, it was thought that the EPROM would be too expensive for mass production use and that it would be confined to development only. It was soon found that small-volume production was economical with EPROM parts, particularly when the advantage of rapid firmware upgrades was considered.
Some microcontrollers, from before the era of EEPROMs and flash memory, use an on-chip EPROM to store their program. Such microcontrollers include some versions of the Intel 8048, the Freescale 68HC11, and the "C" versions of the PIC microcontroller. Like EPROM chips, such microcontrollers came in windowed (expensive) versions that were used for debugging and program development. The same chip came in (somewhat cheaper) opaque OTP packages for production. Leaving the die of such a chip exposed to light can also change behavior in unexpected ways when moving from a windowed part used for development to a non-windowed part for production.
Details
As the quartz window is expensive to make, OTP (one-time programmable) chips were introduced; here, the die is mounted in an opaque package so it cannot be erased after programming – this also eliminates the need to test the erase function, further reducing cost. OTP versions of both EPROMs and EPROM-based microcontrollers are manufactured. However, OTP EPROM (whether separate or part of a larger chip) is being increasingly replaced by EEPROM for small sizes, where the cell cost isn't too important, and flash for larger sizes.
A programmed EPROM retains its data for a minimum of ten to twenty years, with many still retaining data after 35 or more years, and can be read an unlimited number of times without affecting the lifetime. The erasing window must be kept covered with an opaque label to prevent accidental erasure by the UV found in sunlight or camera flashes. Old PC BIOS chips were often EPROMs, and the erasing window was often covered with an adhesive label containing the BIOS publisher's name, the BIOS revision, and a copyright notice. Often this label was foil-backed to ensure its opacity to UV.
Erasure of the EPROM begins to occur with wavelengths shorter than 400 nm. Exposure to sunlight for about one week, or to room fluorescent lighting for about three years, may cause erasure. The recommended erasure procedure is exposure to UV light at 253.7 nm, at a dose of at least 15 W·s/cm², for 20 to 30 minutes with the lamp at a distance of about 2.5 cm.
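The 20-to-30-minute figure can be sanity-checked from the dose alone. The lamp intensity used here (~10 mW/cm² at the chip) is an assumed, plausible value for a small eraser tube, not a measured one:

```python
# Sanity check of the recommended EPROM erase time: time = dose / intensity.

required_dose = 15.0           # recommended minimum UV dose (W·s/cm²)
assumed_intensity = 0.010      # ~10 mW/cm² at ~2.5 cm (assumption)

erase_time_s = required_dose / assumed_intensity
print(f"Estimated erase time: {erase_time_s / 60:.0f} minutes")  # 25 minutes
```

The estimate lands inside the quoted 20-30 minute window; weaker or aged lamps push it toward the long end.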
Erasure can also be accomplished with X-rays:
Erasure, however, has to be accomplished by non-electrical methods, since the gate electrode is not accessible electrically. Shining ultraviolet light on any part of an unpackaged device causes a photocurrent to flow from the floating gate back to the silicon substrate, thereby discharging the gate to its initial, uncharged condition (photoelectric effect). This method of erasure allows complete testing and correction of a complex memory array before the package is finally sealed. Once the package is sealed, information can still be erased by exposing it to X radiation in excess of 5×10⁴ rads, a dose which is easily attained with commercial X-ray generators.
In other words, to erase your EPROM, you would first have to X-ray it and then put it in an oven at about 600 degrees Celsius (to anneal semiconductor alterations caused by the X-rays). The effects of this process on the reliability of the part would have required extensive testing, so they decided on the window instead.
EPROMs had a limited but large number of erase cycles; the silicon dioxide around the gates would accumulate damage from each cycle, making the chip unreliable after several thousand cycles. EPROM programming is slow compared to other forms of memory. Because higher-density parts have little exposed oxide between the layers of interconnects and gate, ultraviolet erasing becomes less practical for very large memories. Even dust inside the package can prevent some cells from being erased.
Wednesday, 23 August 2017
Application-specific integrated circuit
An application-specific integrated circuit (ASIC, pronounced /ˈeɪsɪk/) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency Bitcoin miner is an ASIC. Application-specific standard products (ASSPs) are intermediate between ASICs and industry-standard integrated circuits like the 7400 or 4000 series.
As feature sizes have shrunk and design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.
Field-programmable gate arrays (FPGA) are the modern-day technology for building a breadboard or prototype from standard parts; programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost effective than an ASIC design even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars.
History
The initial ASICs used gate array technology. An early successful commercial application was the gate array circuitry found in the 8-bit ZX81 and ZX Spectrum low-end personal computers, introduced in 1981 and 1982. These were used by Sinclair Research (UK) essentially as a low-cost I/O solution aimed at handling the computer's graphics.
Customization occurred by varying the metal interconnect mask. Gate arrays had complexities of up to a few thousand gates. Later versions became more generalized, with different base dies customised by both metal and polysilicon layers. Some base dies include RAM elements.
Standard-cell designs
In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer. While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor process performance characteristics of the various ASIC manufacturers. Most designers ended up using factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher density device, was the implementation of standard cells. Every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay, capacitance and inductance, that could also be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve very high gate density and good electrical performance. Standard-cell design fits between Gate Array and Full Custom design in terms of both its non-recurring engineering and recurring component cost.
By the late 1990s, logic synthesis tools became available. Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits (ICs) are designed in the following conceptual stages, although these stages overlap significantly in practice:
- A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, usually derived from requirements analysis.
- The design team constructs a description of an ASIC (application-specific integrated circuit) to achieve these goals using an HDL. This process is analogous to writing a computer program in a high-level language. This is usually called the RTL (register-transfer level) design.
- Suitability for purpose is verified by functional verification. This may include such techniques as logic simulation, formal verification, emulation, or creating an equivalent pure software model (see Simics, for example). Each technique has advantages and disadvantages, and often several methods are used.
- Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells. These constructs are taken from a standard-cell library consisting of pre-characterized collections of gates (such as 2-input NOR, 2-input NAND, inverters, etc.). The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells, plus the needed electrical connections between them, is called a gate-level netlist.
- The gate-level netlist is next processed by a placement tool which places the standard cells onto a region representing the final ASIC. It attempts to find a placement of the standard cells, subject to a variety of specified constraints.
- The routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them. Since the search space is large, this process will produce a “sufficient” rather than “globally optimal” solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility (commonly called a 'fab') to produce physical ICs.
- Given the final layout, circuit extraction computes the parasitic resistances and capacitances. In the case of a digital circuit, this will then be further mapped into delay information, from which the circuit performance can be estimated, usually by static timing analysis. This, and other final tests such as design rule checking and power analysis (collectively called signoff) are intended to ensure that the device will function correctly over all extremes of the process, voltage and temperature. When this testing is complete the photomask information is released for chip fabrication.
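The synthesis output described in the steps above — standard cells plus the connections between them — can be pictured as a small data structure. This is a minimal sketch with an invented netlist format and a three-cell toy library, not any real tool's output:

```python
# A tiny gate-level netlist: each entry is (cell_type, output_net, input_nets).
# The cell types mirror a toy standard-cell library.

CELLS = {
    "INV":   lambda x: 1 - x,
    "NAND2": lambda x, y: 1 - (x & y),
    "NOR2":  lambda x, y: 1 - (x | y),
}

NETLIST = [
    ("NAND2", "n1", ("a", "b")),
    ("INV",   "y",  ("n1",)),   # NAND2 followed by INV implements AND
]

def evaluate(netlist, inputs):
    """Propagate input values through the netlist in order (assumes the
    entries are already topologically sorted)."""
    nets = dict(inputs)
    for cell_type, out_net, in_nets in netlist:
        nets[out_net] = CELLS[cell_type](*(nets[n] for n in in_nets))
    return nets

assert evaluate(NETLIST, {"a": 1, "b": 1})["y"] == 1   # 1 AND 1
assert evaluate(NETLIST, {"a": 1, "b": 0})["y"] == 0   # 1 AND 0
```

Placement and routing then decide where each of these cells sits on the die and how the named nets become metal wires.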
The design steps (or flow) are also common to standard product design. The significant difference is that standard-cell design uses the manufacturer's cell libraries that have been used in potentially hundreds of other design implementations and therefore are of much lower risk than full custom design. Standard cells produce a design density that is cost effective, and they can also integrate IP cores and SRAM (Static Random Access Memory) effectively, unlike Gate Arrays.
Gate-array design is a manufacturing method in which the diffused layers, i.e. transistors and other active devices, are predefined and wafers containing such devices are held in stock prior to metallization—in other words, unconnected. The physical design process then defines the interconnections of the final device. For most ASIC manufacturers, this consists of from two to as many as nine metal layers, each metal layer running perpendicular to the one below it. Non-recurring engineering costs are much lower, as photolithographic masks are required only for the metal layers, and production cycles are much shorter, as metallization is a comparatively quick process.
Gate-array ASICs are always a compromise as mapping a given design onto what a manufacturer held as a stock wafer never gives 100% utilization. Often difficulties in routing the interconnect require migration onto a larger array device with consequent increase in the piece part price. These difficulties are often a result of the layout software used to develop the interconnect.
Pure, logic-only gate-array design is rarely implemented by circuit designers today, having been replaced almost entirely by field-programmable devices such as field-programmable gate arrays (FPGAs), which can be programmed by the user and thus offer minimal tooling charges (non-recurring engineering), only marginally increased piece-part cost, and comparable performance. Today, gate arrays are evolving into structured ASICs that consist of a large IP core like a CPU, DSP unit, peripherals, standard interfaces, integrated memories (SRAM), and a block of reconfigurable, uncommitted logic. This shift is largely because ASIC devices are capable of integrating such large blocks of system functionality and a "system-on-a-chip" requires far more than just logic blocks.
In their frequent usages in the field, the terms "gate array" and "semi-custom" are synonymous. Process engineers more commonly use the term "semi-custom", while "gate-array" is more commonly used by logic (or gate-level) designers.
Tuesday, 22 August 2017
varistor
A varistor is an electronic component with an electrical resistance that varies with the applied voltage. Also known as a voltage-dependent resistor (VDR), it has a nonlinear, non-ohmic current–voltage characteristic that is similar to that of a diode. In contrast to a diode however, it has the same characteristic for both directions of traversing current. At low voltage it has a high electrical resistance which decreases as the voltage is raised.
Varistors are used as control or compensation elements in circuits either to provide optimal operating conditions or to protect against excessive transient voltages. When used as protection devices, they shunt the current created by the excessive voltage away from sensitive components when triggered.
The name varistor is a portmanteau of varying resistor. The term is only used for non-ohmic varying resistors. Variable resistors, such as the potentiometer and the rheostat, have ohmic characteristics.
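The non-ohmic characteristic described above is commonly approximated by a power law, I = k·V^α, with a large exponent α for metal-oxide varistors. In this sketch k and α are illustrative assumptions (k chosen so leakage at 200 V is about 1 µA), not datasheet values:

```python
import math

# Power-law approximation of a varistor's current-voltage characteristic.
k = 1e-75      # scaling constant (assumption)
alpha = 30     # nonlinearity exponent, typical order for a MOV (assumption)

def varistor_current(v):
    # Same characteristic for both current directions, unlike a diode.
    return math.copysign(k * abs(v) ** alpha, v)

leakage = varistor_current(200.0)   # far below clamping: ~1 uA
surge = varistor_current(400.0)     # doubling V multiplies I by 2**30 (~1e9)
print(f"200 V: {leakage:.2e} A, 400 V: {surge:.2e} A")
```

The enormous jump in current for a mere doubling of voltage is what makes the device look like an open circuit in normal operation yet clamp hard during a transient.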
History
The development of the varistor, in the form of a new type of rectifier based on a cuprous oxide layer on copper, originated in work by L.O. Grondahl and P.H. Geiger in 1927. Another form, made from silicon carbide by R.O. Grisdale in the early 1930s, was used to guard telephone lines from lightning.
Applications
To protect telecommunication lines, transient suppression devices such as 3 mil carbon blocks (IEEE C62.32), ultra-low capacitance varistors, and avalanche diodes are used. For higher frequencies, such as radio communication equipment, a gas discharge tube (GDT) may be utilized. A typical surge protector power strip is built using MOVs. Low-cost versions may use only one varistor, from the hot (live, active) to the neutral conductor. A better protector contains at least three varistors; one across each of the three pairs of conductors. In the United States, a power strip protector should have an Underwriters Laboratories (UL) 1449 3rd edition approval so that catastrophic MOV failure does not create a fire hazard.
Specifications
Voltage rating
MOVs are specified according to the voltage range that they can tolerate without damage. Other important parameters are the varistor's energy rating in joules, operating voltage, response time, maximum current, and breakdown (clamping) voltage. Energy rating is often defined using standardized transients such as 8/20 microseconds or 10/1000 microseconds, where 8 microseconds is the transient's front time and 20 microseconds is the time to half value.
Response time
The response time of the MOV is not standardized. The sub-nanosecond MOV response claim is based on the material's intrinsic response time, but will be slowed down by other factors such as the inductance of the component leads and the mounting method. That response time is also qualified as insignificant when compared to a transient with an 8 µs rise time, thereby allowing ample time for the device to turn on. When subjected to a very fast, <1 ns rise-time transient, response times for the MOV are in the 40–60 ns range.
Capacitance
Typical capacitances for consumer-sized (7–20 mm diameter) varistors are in the range of 100–2,500 pF. Smaller, lower-capacitance varistors are available with capacitance of ~1 pF for microelectronic protection, such as in cellular phones. These low-capacitance varistors are, however, unable to withstand large surge currents, simply due to their compact PCB-mount size.
Limitations
A MOV inside a TVSS device does not provide equipment with complete power protection. In particular, a MOV device provides no protection for the connected equipment from sustained over-voltages that may result in damage to that equipment as well as to the protector device. Other sustained and harmful overvoltages may be lower and therefore ignored by a MOV device.
A varistor provides no equipment protection from inrush current surges (during equipment startup), from overcurrent (created by a short circuit), or from voltage sags (also known as a brownout); it neither senses nor affects such events. Susceptibility of electronic equipment to these other power disturbances is defined by other aspects of the system design, either inside the equipment itself or externally by means such as a UPS, a voltage regulator or a surge protector with built-in overvoltage protection (which typically consists of a voltage-sensing circuit and a relay for disconnecting the AC input when the voltage reaches a danger threshold).
Comparison to other transient suppressors
Another method for suppressing voltage spikes is the transient-voltage-suppression diode (TVS). Although diodes do not have as much capacity to conduct large surges as MOVs, diodes are not degraded by smaller surges and can be implemented with a lower "clamping voltage". MOVs degrade from repeated exposure to surges and generally have a higher "clamping voltage" so that leakage does not degrade the MOV. Both types are available over a wide range of voltages. MOVs tend to be more suitable for higher voltages, because they can conduct the higher associated energies at less cost.
Another type of transient suppressor is the gas-tube suppressor. This is a type of spark gap that may use air or an inert gas mixture and often a small amount of radioactive material, such as Ni-63, to provide a more consistent breakdown voltage and reduce response time. Unfortunately, these devices may have higher breakdown voltages and longer response times than varistors. However, they can handle significantly higher fault currents and withstand multiple high-voltage hits (for example, from lightning) without significant degradation.
Multi-layer varistor
Multi-layer varistor (MLV) devices provide electrostatic discharge protection to electronic circuits, handling low- to medium-energy transients in sensitive equipment operating at 0–120 V DC. They have peak current ratings from about 20 to 500 amperes, and peak energy ratings from 0.05 to 2.5 joules.
Monday, 21 August 2017
chip on board
Chip on board (COB) describes a bare chip mounted directly onto the printed circuit board (PCB). After the bond wires are attached, a glob of epoxy or plastic is used to cover the chip and its connections. A tape automated bonding (TAB) process may be used to place the chip on the board.
Sunday, 20 August 2017
Banana Pi
The Banana Pi is a series of credit-card-sized single-board computers built around a low-cost concept for hardware and software development and for learning educational software such as Scratch. Its hardware design was influenced by the Raspberry Pi in 2013. It is produced by the Chinese company Shenzhen SINOVOIP Co., Ltd.
Banana Pi software is compatible with Raspberry Pi boards. Banana Pi also can run NetBSD, Android, Ubuntu, Debian, Arch Linux, Raspbian operating systems, though the CPU complies with the requirements of the Debian
armhf
port. It uses the Allwinner SoC (system on chip) and as such is mostly covered by the linux-sunxi port.Banana Pi is the open source hardware and software platform which is designed to assist bananapi.org and banana-pi.org
Saturday, 19 August 2017
Beagleboard
The BeagleBoard is a low-power open-source hardware single-board computer produced by Texas Instruments in association with Digi-Key and Newark element14. The BeagleBoard was also designed with open source software development in mind, and as a way of demonstrating Texas Instruments' OMAP3530 system-on-a-chip. The board was developed by a small team of engineers as an educational board that could be used in colleges around the world to teach open source hardware and software capabilities. It is also sold to the public under the Creative Commons share-alike license. The board was designed using Cadence OrCAD for schematics and Cadence Allegro for PCB manufacturing; no simulation software was used.
Features
The BeagleBoard measures approximately 75 by 75 mm and has all the functionality of a basic computer. The OMAP3530 includes an ARM Cortex-A8 CPU (which can run Linux, Minix, FreeBSD, OpenBSD, RISC OS, or Symbian; Android is being ported), a TMS320C64x+ DSP for accelerated video and audio decoding, and an Imagination Technologies PowerVR SGX530 GPU to provide accelerated 2D and 3D rendering that supports OpenGL ES 2.0. Video out is provided through separate S-Video and HDMI connections. A single SD/MMC card slot supporting SDIO, a USB On-The-Go port, an RS-232 serial connection, a JTAG connection, and two stereo 3.5 mm jacks for audio in/out are provided.
Built-in storage and memory are provided through a PoP chip that includes 256 MB of NAND flash memory and 256 MB of RAM (128 MB on earlier models).
The board uses up to 2 W of power and can be powered from the USB connector, or a separate 5 V power supply. Because of the low power consumption, no additional cooling or heat sinks are required.
Friday, 18 August 2017
Transformers
A transformer is an electrical device that transfers electrical energy between two or more circuits through electromagnetic induction. A varying current in one coil of the transformer produces a varying magnetic field, which in turn induces a voltage in a second coil. Power can be transferred between the two coils through the magnetic field, without a metallic connection between the two circuits. Faraday's law of induction, discovered in 1831, describes this effect. Transformers are used to increase or decrease alternating voltages in electric power applications.
Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electrical energy. A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume to units interconnecting the power grid weighing hundreds of tons.
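The induction described above gives the ideal transformer relation Vs = Vp × (Ns/Np). A minimal sketch, ignoring losses and loading; the winding counts below are made-up example values:

```python
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Ideal transformer: secondary voltage scales with the turns ratio, Vs = Vp * (Ns / Np)."""
    return v_primary * n_secondary / n_primary

# Stepping 230 V mains down through a 2000:100 (20:1) winding ratio
print(secondary_voltage(230.0, 2000, 100))  # 11.5 (volts)
```

Swap the windings (more turns on the secondary) and the same formula gives a step-up transformer; power stays roughly constant, so the current scales the opposite way.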
Thursday, 17 August 2017
Solenoid information
A solenoid is a device that converts energy into linear motion. This energy may come from an electromagnetic field, a pneumatic (air-powered) chamber, or a hydraulic (fluid-filled) cylinder. These devices are commonly found in electric bell assemblies, automotive starter systems, industrial air hammers, and many other machines that rely on a sudden burst of power to move a specific part.
In order to understand the underlying principle, a person can examine a typical pinball machine. At the beginning of play, a steel ball rests on a rubber-tipped plunger that is held in place by a compression spring, which means it has no energy to move the ball when at rest. The player's hand provides additional energy as the plunger assembly is pulled back. Upon release, the spring forces almost all of the plunger pin's kinetic energy on a small area of the steel ball. The ball is flung into the playing field and the pinball game begins. This manual plunger is a rudimentary example of a solenoid.
The difficulty with using manual pinball plungers on other machines is that someone must constantly pull the spring back and release the energy by hand. An improved solenoid would provide its own means of pulling back on the pin and releasing it under control. This is the principle behind a simple electric one, in which a metallic cylinder acts as the "plunger."
A compression spring holds this metal pin partially out of an electromagnetic housing. When power from a battery or electric generator flows around the electromagnet, the metal pin or cylinder is magnetically drawn inside the housing, much like the player's hand pulls the plunger back in the pinball example. When the electric current stops, the pin is released and the compression spring sends it forward with significant force. The pin may strike the inside of a bell or forcefully eject a part from a molding machine. Many electronic machines contain numerous solenoids.
Other types depend on compressed air for their power. A single piston may be placed in an airtight cylinder connected to a source of highly-compressed air. A strong internal spring may hold the piston in place until the air pressure has reached a predetermined level and then the piston is released. The compressed air is allowed to escape as the piston drives forward.
Because the energy released by a solenoid can be concentrated, pneumatic ones are popular for heavy tools and machining applications which require substantial power. A jackhammer is a good example of this type in action. The central piston is driven by air into the concrete, then the recoil of the hammer returns the piston to its original position.
An even more powerful solenoid uses hydraulics as its source of power. The piston or pin is seated in a cylinder filled with a hydraulic fluid. As this hydraulic fluid fills the cylinder, everything is pushed forward, including the piston or pin. As the piston travels towards a piece of metal or other target, the fluid buildup becomes very resistant to compression, and the piston will concentrate all of the cylinder's energy on whatever it encounters, even the heaviest titanium.
When the solenoid has released all of its energy, the hydraulic fluid drains out of the chamber and the piston is drawn back to its original position. This action can take place in a matter of seconds. This type is so powerful that it is generally used only for the heaviest projects. Wave pools use them to release the giant stoppers at the bottom of their holding tanks. Aircraft manufacturers use this type to bend titanium and other heavy metals.
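For the electric type, a rough first-order estimate of the pull force on the plunger is F = (NI)²μ₀A/(2g²). The sketch below is only an approximation that ignores core saturation and fringing, and the turns count, current, plunger area and air gap are made-up example values:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def pull_force_n(turns, current_a, plunger_area_m2, air_gap_m):
    """First-order solenoid pull force: F = (N*I)^2 * mu0 * A / (2 * g^2)."""
    return (turns * current_a) ** 2 * MU0 * plunger_area_m2 / (2 * air_gap_m ** 2)

# 500 turns carrying 1 A, a 1 cm^2 plunger face, and a 2 mm air gap
force = pull_force_n(500, 1.0, 1e-4, 2e-3)
print(round(force, 2))  # roughly 3.93 N
```

Note how the force grows as the gap closes (the 1/g² term), which is why a solenoid's pull is weakest at the start of its stroke.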
Wednesday, 16 August 2017
Hall Effect Sensor
A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Hall effect sensors are used for proximity switching, positioning, speed detection, and current sensing applications.
In a Hall-effect sensor, a thin strip of metal has a current applied along it. In the presence of a magnetic field, the electrons are deflected towards one edge of the strip, producing a voltage gradient across the short side of the strip (perpendicular to the feed current). Inductive sensors are just a coil of wire; in the presence of a changing magnetic field, a current will be induced in the coil, producing a voltage at its output. Hall-effect sensors have the advantage that they can detect static (non-changing) magnetic fields.
In its simplest form, the sensor operates as an analog transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined. Using groups of sensors, the relative position of the magnet can be deduced.
Frequently, a Hall sensor is combined with threshold detection so that it acts as, and is called, a switch. Commonly seen in industrial applications such as pneumatic cylinders, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. They can also be used in computer keyboard applications that require ultra-high reliability.
Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing, tachometers and anti-lock braking systems. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel with two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disk drives.
Working principle
When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected from a straight path. The flow of electrons through a conductor is known as a beam of charged carriers. When a conductor is placed in a magnetic field perpendicular to the direction of the electrons, they will be deflected from a straight path. As a consequence, one plane of the conductor will become negatively charged and the opposite side will become positively charged. The voltage between these planes is called the Hall voltage.
When the force on the charged particles from the electric field balances the force produced by the magnetic field, the separation of charge stops. If the current is not changing, then the Hall voltage is a measure of the magnetic flux density. Basically, there are two kinds of Hall effect sensors. One is linear, which means the output voltage depends linearly on magnetic flux density; the other is called threshold, which means there is a sharp change of output voltage at a certain magnetic flux density.
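That balance of forces gives the usual Hall voltage formula, V_H = IB/(nqt). A small sketch below illustrates why plain metal strips make poor sensors compared with thin semiconductors; the copper carrier density of roughly 8.5×10²⁸ per m³ is a textbook value, and the other numbers are made-up examples:

```python
E_CHARGE = 1.602e-19  # elementary charge, coulombs

def hall_voltage_v(current_a, b_field_t, carrier_density_m3, thickness_m):
    """Hall voltage across a current-carrying plate: V_H = I * B / (n * q * t)."""
    return current_a * b_field_t / (carrier_density_m3 * E_CHARGE * thickness_m)

# A 100 um copper strip carrying 5 A in a strong 1 T field
v_hall = hall_voltage_v(5.0, 1.0, 8.5e28, 100e-6)
print(v_hall)  # only a few microvolts
```

Because copper's carrier density is enormous, the output is microvolts even in a strong field; practical Hall sensors use thin, low-carrier-density semiconductors so the same formula yields a usable voltage.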
Hall probe
A Hall probe contains an indium compound semiconductor crystal such as indium antimonide, mounted on an aluminum backing plate, and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box.
When the Hall probe is held so that the magnetic field lines are passing at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect is seen when a conductor is passed through a uniform magnetic field. The natural electron drift of the charge carriers causes the magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these charge carriers. The result is what is seen as a charge separation, with a buildup of either positive or negative charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field.
A Hall probe should be calibrated against a known value of magnetic field strength. For a solenoid the Hall probe is placed in the center.
Tuesday, 15 August 2017
Transformerless power supply planning
Hi folks, here is the early stage of the transformerless power supply I am working on at the moment. The values of the components may change as I get further into the build and have done some more testing. Once I am happy with the circuit I will release the schematic.
Monday, 14 August 2017
time to say sorry
Hi folks, I would like to say sorry for the poor content over the last few days. As some of you know, I have a very heavy workload at the moment and I am trying my best to keep daily posts coming, but I have not been happy with some of my posts of late. Please bear with me, as the quality will improve soon; as things stand I am hoping to get back to normal by the end of the month.
I have been working on a transformerless power supply, which is slowly coming along. I hope to have something to show you in the next few days; I am planning to use it in a few upcoming projects.
Sunday, 13 August 2017
IRF520N
Fifth Generation HEXFETs from International Rectifier utilize advanced processing techniques to achieve extremely low on-resistance per silicon area. This benefit, combined with the fast switching speed and ruggedized device design that HEXFET Power MOSFETs are well known for, provides the designer with an extremely efficient and reliable device for use in a wide variety of applications.
The TO-220 package is universally preferred for all commercial-industrial applications at power dissipation levels to approximately 50 watts. The low thermal resistance and low package cost of the TO-220 contribute to its wide acceptance throughout the industry.
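At those power levels, the dominant loss when the MOSFET is fully switched on is usually conduction loss, P = I²·R_DS(on). A minimal sketch; the 0.27 Ω figure is the commonly quoted maximum on-resistance for this part at V_GS = 10 V, so treat it as an assumption and check the datasheet for your device:

```python
RDS_ON_OHM = 0.27  # assumed max on-resistance at Vgs = 10 V; verify against the datasheet

def conduction_loss_w(drain_current_a, rds_on_ohm=RDS_ON_OHM):
    """MOSFET conduction loss when fully on: P = Id^2 * Rds(on)."""
    return drain_current_a ** 2 * rds_on_ohm

# At 4 A the device already dissipates several watts
print(round(conduction_loss_w(4.0), 2))  # about 4.32 W -- heatsink territory
```

This is why the "approximately 50 watts" package limit is theoretical: without a good heatsink, conduction loss alone forces you to derate well below it.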
Saturday, 12 August 2017
Dealing with setbacks
Sometimes things can go wrong, and it is important to figure out why. When dealing with electronics there will be many a time you will see the magic smoke or see sparks; this is all part of learning. Normally these will be caused by a misplaced wire or a bad connection. When it happens, I find it easier to stop, look at your circuit and see if you can find out what went wrong, and not get downbeat about it; just carry on until you get the results you want.
Friday, 11 August 2017
Thursday, 10 August 2017
looking for project suggestions
Hi folks, I am looking for project suggestions. What would you like to see me make?
If you have any ideas, pop them in the comments or on Twitter or Facebook. All projects will be considered, and you will get a shoutout in the build post.
Wednesday, 9 August 2017
556 timer
Hi folks, here is some information on the 556 timer.
DESCRIPTION
The NE556 dual monolithic circuit is a highly stable controller capable of producing accurate time delays or oscillation. The NE556 is the dual of the NE555; timing is provided by an external resistor and capacitor for each function. The two timers operate independently of each other, sharing only Vcc and GND. The circuits may be triggered and reset on falling waveforms. The output structures may sink or source 200 mA.
FEATURES
* High current drive capability (200 mA)
* Adjustable duty cycle
* Timing from microseconds to hours
* Temperature stability of 0.005%/°C
* TTL compatible
* Operates in both astable and monostable modes
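In astable mode, each half of the 556 follows the standard 555 timing formulas: f = 1.44 / ((R1 + 2·R2)·C) and duty = (R1 + R2) / (R1 + 2·R2). A quick sketch with made-up example component values:

```python
def astable_556(r1_ohm, r2_ohm, c_f):
    """Astable frequency and duty cycle for one half of a 555/556 timer."""
    period_factor = r1_ohm + 2 * r2_ohm
    freq_hz = 1.44 / (period_factor * c_f)
    duty = (r1_ohm + r2_ohm) / period_factor
    return freq_hz, duty

# Example: R1 = 10k, R2 = 47k, C = 10 uF -- a slow LED blinker
freq, duty = astable_556(10e3, 47e3, 10e-6)
print(round(freq, 2), round(duty, 3))  # about 1.38 Hz at about 55% duty
```

Note that the duty cycle can never drop below 50% in this basic configuration, since R1 + R2 is always more than half of R1 + 2·R2; a diode across R2 is the usual workaround.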
Tuesday, 8 August 2017
EL817 Optocoupler
Hi folks, here is some information on the EL817 optocoupler.
Description
The EL817 series contains an infrared emitting diode optically coupled to a phototransistor. It is packaged in a 4-pin DIP and is available in wide-lead spacing and SMD options.
Features:
• Current transfer ratio (CTR:MIN.50% at IF =5mA ,VCE =5V)
• High isolation voltage between input and output (Viso=5000 V rms )
• Compact dual-in-line package (EL817*: 1-channel type)
• Pb free
• UL approved (No. E214129)
• VDE approved (No. 132249)
• SEMKO approved (No. 0143133/01-03)
• NEMKO approved (No. P00102385)
• DEMKO approved (No. 310352-04)
• FIMKO approved (No. FI 16763A2)
• CSA approved (No. 1143601)
• BSI approved (No. 8592 / 8593)
• Options available:
  - Leads with 0.4" (10.16 mm) spacing (M type)
  - Lead bends for surface mounting (S type)
  - Tape and reel of Type Ⅰ for SMD (add "-TA" suffix)
  - Tape and reel of Type Ⅱ for SMD (add "-TB" suffix)
  - The tape is 16 mm wide and is wound on a 33 cm reel
Applications
• Computer terminals
• System appliances, measuring instruments
• Registers, copiers, automatic vending machines
• Cassette type recorder
• Electric home appliances, such as fan heaters, etc.
• Signal transmission between circuits of different potentials and impedances
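The current transfer ratio (CTR) listed in the features is the key design parameter: it tells you how much collector current the phototransistor can deliver for a given LED current, Ic = If × CTR. A minimal sketch using the datasheet's stated worst-case numbers:

```python
def collector_current_ma(led_current_ma, ctr_percent):
    """Phototransistor output current available from an optocoupler: Ic = If * CTR."""
    return led_current_ma * ctr_percent / 100.0

# Datasheet worst case: CTR of at least 50% at If = 5 mA
print(collector_current_ma(5.0, 50.0))  # at least 2.5 mA of collector current
```

When sizing the LED resistor and the phototransistor load, design against this minimum CTR rather than the typical value, since CTR also degrades as the LED ages.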
Monday, 7 August 2017
keeping the updates rolling out
Hi folks, as I explained in the news post, due to work commitments I've had to change the schedule, but this is only a short-term thing. Once my workload drops I will move back to a very similar posting schedule, with a few changes that I am working on during this downtime, but there will still be daily posts showing the little bits I'm working on for future projects. I am also planning a video post, which will be a different challenge for me. Sorry for the lack of electronics and tech in this post; there will be a far better post tomorrow. Thank you for the support, dobby
Sunday, 6 August 2017
Project Day #14 light and sound kit
Hi folks, just a quick kit build this week. Hope you enjoy it.
The kit
Here is the kit I will be building today. I picked it up from eBay a while back.
Step 1
Solder the resistors and diodes to the PCB.
Step 2
Solder the transistors, LED and mic to the PCB.
Step 3
Solder the capacitors and LDR to the PCB.
The finished board
Here is the finished board.
Thanks for taking the time to read this. If you liked what you've read, then please check back tomorrow after 17.00 BST for another update. Thanks again, Dobby
Saturday, 5 August 2017
The News #14
Good afternoon folks, and welcome to this week's news update, where I talk about how my week's gone, upcoming projects and other things.
This week's news is rather short, as I have been rather busy with work, which is really affecting the time I get to spend on the blog. This will soon change as my workload changes, but due to a very heavy workload over the next few weeks I will be making a few changes during this time. Starting from Monday I won't be posting the normal posts, but I'll still be posting daily posts; I hope you understand.
in other news
I've gotten some parts for upcoming projects that I have been planning. I hope to share parts of the projects in the coming weeks; some of this will be coding work for projects, as well as prototyping parts for them.
After this busy time I will be making a few changes. I hope to have a video setup ready so that I can post videos on a regular basis. If you have any suggestions, please let me know.
This week's post
Thanks for taking the time to read this. If you liked what you've read, then please check back tomorrow after 17.00 BST for another update. Thanks again, Dobby
Friday, 4 August 2017
Flashback Friday #14 the NES
Good afternoon folks, and welcome to another Flashback Friday. This week's blast from the past is the NES.
Development
Following a series of arcade game successes in the early 1980s, Nintendo made plans to create a cartridge-based console called the Famicom, which is short for Family Computer. Masayuki Uemura designed the system. Original plans called for an advanced 16-bit system which would function as a full-fledged computer with a keyboard and floppy disk drive, but Nintendo president Hiroshi Yamauchi rejected this and instead decided to go for a cheaper, more conventional cartridge-based game console, as he felt that features such as keyboards and disks were intimidating to non-technophiles. A test model was constructed in October 1982 to verify the functionality of the hardware, after which work began on programming tools. Because 65xx CPUs had not been manufactured or sold in Japan up to that time, no cross-development software was available and it had to be produced from scratch. Early Famicom games were written on a system that ran on an NEC PC-8001 computer, and LEDs on a grid were used with a digitizer to design graphics, as no software design tools for this purpose existed at that time.
The code name for the project was "GameCom", but Masayuki Uemura's wife proposed the name "Famicom", arguing that "In Japan, 'pasokon' is used to mean a personal computer, but it is neither a home nor a personal computer. Perhaps we could say it is a family computer." Meanwhile, Hiroshi Yamauchi decided that the console should use a red and white theme after seeing a billboard for DX Antenna which used those colors.
During the creation of the Famicom, the ColecoVision, a video game console made by Coleco to compete against Atari's Atari 2600 game system in the United States, was a huge influence. Takao Sawano, chief manager of the project, brought a ColecoVision home to his family, who were impressed by the system's capability to produce smooth graphics at the time, which contrasted with the flickering and slowdown commonly seen on Atari 2600 games. Uemura, head of Famicom development, stated that the ColecoVision set the bar that influenced how he would approach the creation of the Famicom.
Original plans called for the Famicom's cartridges to be the size of a cassette tape, but ultimately they ended up being twice as big. Careful design attention was paid to the cartridge connectors since loose and faulty connections often plagued arcade machines. As it necessitated taking 60 connection lines for the memory and expansion, Nintendo decided to produce their own connectors in-house rather than use ones from an outside supplier.
The controllers were hard-wired to the console with no connectors, for cost reasons. The game pad controllers were more or less copied directly from the Game & Watch machines, although the Famicom design team originally wanted to use arcade-style joysticks, even taking apart ones from American game consoles to see how they worked. There were concerns regarding the durability of the joystick design and that children might step on joysticks left on the floor. Katsuya Nakawaka attached a Game & Watch D-pad to the Famicom prototype and found that it was easy to use and caused no discomfort. Ultimately, though, they installed a 15-pin expansion port on the front of the console so that an optional arcade-style joystick could be used.
Uemura added an eject lever to the cartridge slot which was not really necessary, but he felt that children could be entertained by pressing it. He also added a microphone to the second controller with the idea that it could be used to make players' voices sound through the TV speaker.
Release
The console was released on July 15, 1983 as the Family Computer (or Famicom for short) for ¥14,800, alongside three ports of Nintendo's successful arcade games Donkey Kong, Donkey Kong Jr. and Popeye. The Famicom was slow to gather momentum; a bad chip set caused the initial release of the system to crash. Following a product recall and a reissue with a new motherboard, the Famicom's popularity soared, becoming the best-selling game console in Japan by the end of 1984.
Encouraged by this success, Nintendo turned its attention to the North American market, entering into negotiations with Atari to release the Famicom under Atari's name as the Nintendo Advanced Video Gaming System. The deal was set to be finalized and signed at the Summer Consumer Electronics Show in June 1983. However, Atari discovered at that show that its competitor Coleco was illegally demonstrating its Coleco Adam computer with Nintendo's Donkey Kong game. This violation of Atari's exclusive license with Nintendo to publish the game for its own computer systems delayed the implementation of Nintendo's game console marketing contract with Atari. Atari's CEO Ray Kassar was fired the next month, so the deal went nowhere, and Nintendo decided to market its system on its own.
Subsequent plans to market a Famicom console in North America featuring a keyboard, cassette data recorder, wireless joystick controller and a special BASIC cartridge under the name "Nintendo Advanced Video System" likewise never materialized. By the beginning of 1985, the Famicom had sold more than 2.5 million units in Japan, and Nintendo soon announced plans to release it in North America as the Advanced Video Entertainment System (AVS) that same year. The American video game press was skeptical that the console could have any success in the region, with the March 1985 issue of Electronic Games magazine stating that "the videogame market in America has virtually disappeared" and that "this could be a miscalculation on Nintendo's part."
At June 1985's Consumer Electronics Show (CES), Nintendo unveiled the American version of its Famicom, with a new case redesigned by Lance Barr and featuring a "zero insertion force" cartridge slot. This is the system that would eventually be officially deployed as the Nintendo Entertainment System, or the colloquial "NES". Nintendo seeded these first systems to limited American test markets starting in New York City on October 18, 1985, followed by a wider North American release in February of the following year; the full nationwide release came in September 1986. Nintendo released 17 launch titles: 10-Yard Fight, Baseball, Clu Clu Land, Duck Hunt, Excitebike, Golf, Gyromite, Hogan's Alley, Ice Climber, Kung Fu, Pinball, Soccer, Stack-Up, Tennis, Wild Gunman, Wrecking Crew, and Super Mario Bros. Some varieties of these launch games contained Famicom chips with an adapter inside the cartridge so they would play on North American consoles, which is why the title screen of Gyromite bears the Famicom title "Robot Gyro" and the title screen of Stack-Up bears the Famicom title "Robot Block".
The system's launch represented not only a new product, but also a reframing of the severely damaged home video game market. The video game market crash of 1983 had occurred in large part due to a lack of consumer and retailer confidence in video games, which had been partially due to confusion and misrepresentation in video game marketing. Prior to the NES, the packaging of many video games presented bombastic artwork which exaggerated the graphics of the actual game. In terms of product identity, a single game such as Pac-Man would appear in many versions on many different game consoles and computers, with large variations in graphics, sound, and general quality between the versions. In stark contrast, Nintendo's marketing strategy aimed to regain consumer and retailer confidence by delivering a singular platform whose technology was not in need of exaggeration and whose qualities were clearly defined.
To differentiate Nintendo's new home platform from the perception of a troubled and shallow video game market, the company freshened its product nomenclature and established a strict product approval and licensing policy. The overall system was referred to as an "Entertainment System" instead of a "video game system", which was centered upon a machine called a "Control Deck" instead of a "console", and which featured software cartridges called "Game Paks" instead of "video games". To deter production of games which had not been licensed by Nintendo, and to prevent copying, the 10NES lockout chip system acted as a lock-and-key coupling of each Game Pak and Control Deck. The packaging of the launch lineup of NES games bore pictures of close representations of actual onscreen graphics. To reduce consumer confusion, symbols on the games' packaging clearly indicated the genre of the game. A 'seal of quality' was printed on all licensed game and accessory packaging. The initial seal stated, "This seal is your assurance that Nintendo has approved and guaranteed the quality of this product". This text was later changed to "Official Nintendo Seal of Quality".
Unlike with the Famicom, Nintendo of America marketed the console primarily to children, instituting a strict policy of censoring profanity and sexual, religious, or political content. The most famous example was Lucasfilm's attempt to port the comedy-horror game Maniac Mansion to the NES, which Nintendo insisted be considerably watered down. Nintendo of America continued its censorship policy until 1994, with the advent of the Entertainment Software Rating Board system.
The optional Robotic Operating Buddy, or R.O.B., was part of a marketing plan to portray the NES's technology as novel and sophisticated compared to previous game consoles, and to position it within reach of the better-established toy market. While the American public at first exhibited limited excitement for the console itself, peripherals such as the light gun and R.O.B. attracted extensive attention.
In Europe, Australia and Canada, the system was released to two separate marketing regions. The first consisted of mainland Europe (excluding Italy), where distribution was handled by a number of different companies, with Nintendo responsible for most cartridge releases; most of this region saw a 1986 release. The following year, Mattel handled distribution for the second region, consisting of the United Kingdom, Ireland, Canada, Italy, Australia and New Zealand. Not until the 1990s did Nintendo's newly created European branch direct distribution throughout Europe.
For its complete North American release, the Nintendo Entertainment System was progressively released over the ensuing years in four different bundles: the Deluxe Set, the Control Deck, the Action Set and the Power Set. The Deluxe Set, retailing at US$199.99 (equivalent to $481 in 2016), included R.O.B., a light gun called the NES Zapper, two controllers, and two Game Paks: Gyromite and Duck Hunt. The Basic Set retailed at US$89.99 with no game, and US$99.99 bundled with Super Mario Bros. The Action Set, retailing in November 1988 for US$149.99, came with the Control Deck, two game controllers, an NES Zapper, and a dual Game Pak containing both Super Mario Bros. and Duck Hunt. In 1989, the Power Set included the console, two game controllers, an NES Zapper, a Power Pad, and a triple Game Pak containing Super Mario Bros., Duck Hunt, and World Class Track Meet. In 1990, a Sports Set bundle was released, including the console, an NES Satellite infrared wireless multitap adapter, four game controllers, and a dual Game Pak containing Super Spike V'Ball and Nintendo World Cup. Two more bundle packages were later released using the original model NES console. The Challenge Set of 1992 included the console, two controllers, and a Super Mario Bros. 3 Game Pak for a retail price of US$89.99. The Basic Set, first released in 1987, was repackaged for a retail price of US$89.99; it included only the console and two controllers, and was no longer bundled with a cartridge. Instead, it contained a book called the Official Nintendo Player's Guide, which contained detailed information for every NES game made up to that point.
Finally, the console was redesigned for both the North American and Japanese markets as part of the final Nintendo-released bundle package. The package included the new-style NES-101 console and one redesigned "dogbone" game controller. Released in October 1993 in North America, this final bundle retailed for US$49.99 and remained in production until the discontinuation of the NES in 1995.
My experience
Thanks for taking the time to read this. If you liked what you've read, please check back tomorrow after 17:00 BST for another update. Thanks again,
Dobby