Tuesday 16 July 2013

MINING ALARMING INCIDENTS IN DATA STREAMS

Click here to download the file.



MINING ALARMING INCIDENTS IN DATA STREAMS .ppt

 

HACKING A NEW PERSPECTIVE



Hacking (English verb to hack, singular noun a hack) refers to the re-configuring or re-programming of a system to function in ways not facilitated by the owner, administrator, or designer. The terms have several related meanings in the technology and computer science fields, wherein a "hack" may refer to a clever or quick fix to a computer program problem, or to what may be perceived to be a clumsy or inelegant (but usually relatively quick) solution to a problem, such as a "kludge".

The terms "hack" and "hacking" are also used to refer to a modification of a program or device to give the user access to features that were otherwise unavailable, such as by circuit bending. It is from this usage that the term "hacking" is often incorrectly used to refer to more nefarious criminal uses such as identity theft, credit card fraud or other actions categorized ascomputer crime; there being a distinction between security breaking and hacking, a better term for security breaking would be "cracking





Click here to download the file.


HACKING A NEW PERSPECTIVE .ppt

 

POLYMER LIGHT EMITTING DIODES (PLEDs)

Click here to download the file.



POLYMER LIGHT EMITTING DIODES (PLEDs)  .ppt

 

An Overview OF .Net Framework

Click here to download the file.

An Overview OF .Net Framework .ppt


Monday 15 July 2013

SIGNALING SYSTEM NO-7 SS7

Click here to download the file.



SIGNALING SYSTEM NO-7 SS7 .ppt

 

MOBILE AD HOC NETWORK

Click here to download the file.



MOBILE AD HOC NETWORK .ppt

 

Speech Recognition using DWT

Click here to download the file.



Speech Recognition using DWT .ppt

 

TRANSMISSION PLANNING AND ITS IMPROVEMENT

Click here to download the file.



TRANSMISSION PLANNING AND ITS IMPROVEMENT .ppt

 

Military Robots

Click here to download the file.



Military Robots .ppt

 

VLSI For COMMUNICATIONS

Click here to download the file.



VLSI For COMMUNICATIONS .ppt

 

Sunday 14 July 2013

SECOND HARMONIC GENERATION IN MATERIALS

Click here to download the file.





SECOND HARMONIC GENERATION IN MATERIALS .ppt



 

DISTRIBUTED COMPUTING

Click here to download the file.



DISTRIBUTED COMPUTING .ppt

 

VOICE OVER INTERNET PROTOCOL

Click here to download the file.



VOICE OVER INTERNET PROTOCOL.ppt

 

Integrated Services Digital Network

Click here to download the file.



Integrated Services Digital Network.ppt

 

Virtual LAN ( VLAN )

Click here to download the file.



Virtual LAN ( VLAN ) .ppt

 

ELECTROSTATIC PRECIPITATORS

Click here to download the file.



ELECTROSTATIC PRECIPITATORS .ppt

 

Wi Fi Protected Access

Click here to download the file.



Wi Fi Protected Access.ppt

 

Intrusion Detection and Isolation Protocol

Click here to download the file.



Intrusion Detection and Isolation Protocol.ppt

 

Saturday 13 July 2013

SDMA Techniques For Wireless ATM

Click here to download the file.



SDMA Techniques For Wireless ATM.ppt

 

Content Management Server

Click here to download the file.



Content Management Server.ppt

 

Internet Protocol Version-6.0

Click here to download the file.



Internet Protocol Version-6.0 .ppt

 

Smart Card

Click here to download the file.



Smart Card .ppt

 

COMPRESSION TECHNIQUE

Click here to download the file.



COMPRESSION TECHNIQUE .ppt

 

ROLE OF BIOMETRICS IN SECURITY TECHNOLOGY

Click here to download the file.



ROLE OF BIOMETRICS IN SECURITY TECHNOLOGY .ppt

 

VOIP AND DESCRIPTION OF MGCP H.323

Click here to download the file.



VOIP AND DESCRIPTION OF MGCP H.323.ppt

 

Report and Abstract for Plan9 Operating System

Download your Seminar Reports for Plan9 Operating System


An operating system is a collection of programs and procedures that help the user to work with the computer efficiently. To enable efficient and productive use of a system of any sort, the operating system must be designed to give users all the facilities they need for their work. Many operating systems have been designed over the years for different classes of users, but most of them were designed for single-user workstations. By the mid-1980s, the trend in computing was away from large centralized time-shared computers towards networks of smaller, personal machines, typically UNIX `workstations'. People had grown weary of overloaded, bureaucratic timesharing machines and were eager to move to small, self-maintained systems, even if that meant a net loss in computing power. As microcomputers became faster, even that loss was recovered, and this style of computing remains popular today.
Plan9 Operating System
In the rush to personal workstations, though, some of their weaknesses were overlooked. First, the operating system they run, UNIX, is itself an old timesharing system and has had trouble adapting to ideas born after it. Graphics and networking were added to UNIX well into its lifetime and remain poorly integrated and difficult to administer. Plan 9 began in the late 1980s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements. The idea was to build a time-sharing system out of workstations, but in a novel way. Different computers would handle different tasks: small, cheap machines in people's offices would serve as terminals providing access to large, central, shared resources such as computing servers and file servers. For the central machines, the coming wave of shared-memory multiprocessors seemed obvious candidates.


The problems with UNIX were too deep to fix, but some of its ideas could be brought along. The best was its use of the file system to coordinate the naming of and access to resources, even those, such as devices, not traditionally treated as files. Plan 9 is designed around this basic principle: all resources appear as files in a hierarchical file system, which is unique to each process. As for the design of any operating system, things such as the design of the file and directory system implementation and the various interfaces are important. Plan 9 has all these well-designed features, which provide a strong base for an operating system well suited to a distributed and networked environment.
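
To make the "everything is a file" idea concrete, here is a minimal C sketch in the spirit of Plan 9, where even the current time is read with ordinary file operations rather than a dedicated system call. It assumes a Plan 9-like /dev/time file served by the kernel; on other systems that path will not exist.

    #include <stdio.h>

    /* Read the time the Plan 9 way: as the contents of a file.
       Assumes a Plan 9-style /dev/time; no special API is needed. */
    int main(void) {
        char buf[64];
        FILE *f = fopen("/dev/time", "r");
        if (f == NULL) {
            perror("open /dev/time");
            return 1;
        }
        size_t n = fread(buf, 1, sizeof buf - 1, f);
        buf[n] = '\0';
        printf("contents of /dev/time: %s\n", buf);
        fclose(f);
        return 0;
    }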


The different features of the Plan 9 operating system are:
- The dump file system makes a daily snapshot of the file store available to the users.
- The Unicode character set is supported throughout the system.
- Advanced kernel synchronization facilities are provided for parallel processing.
- Security: there is no super-user or root user, and passwords are never sent over the network.


Hardware Requirements


A large Plan 9 installation has a number of computers networked together, each providing a particular class of service. Shared multiprocessor servers provide computing cycles; other large machines offer file storage. Lower-bandwidth networks such as Ethernet or ISDN connect these servers to office- and home-resident workstations or PCs, called terminals in Plan 9 terminology. The figure below shows how the CPU servers and file servers share fast local area networks, while terminals use slower wide-area networks such as Ethernet, Datakit, or telephone lines to connect to them. Gateway machines, which are just CPU servers connected to multiple networks, allow machines on one network to see another. This modern style of computing offers each user a dedicated PC or workstation. Plan 9 runs on multiple hardware platforms and is highly suited to building large distributed systems. A typical Plan 9 installation would comprise a file server, some CPU servers and a large number of terminals, and it suits everything from small research groups to large organizations. The low system management overhead makes it particularly suitable for classroom teaching applications.
Plan9 Operating System




Download your Seminar Reports for Plan9 Operating System

Friday 12 July 2013

Report and Abstract for Braingate System

Download your Seminar Reports for Braingate System


The mind-to-movement system that allows a quadriplegic man to control a computer using only his thoughts is a scientific milestone. It was reached, in large part, through the BrainGate system. This system has become a boon to the paralyzed. The BrainGate system is based on Cyberkinetics' platform technology to sense, transmit, analyze and apply the language of neurons. The principle of operation behind the BrainGate system is that, with intact brain function, brain signals are generated even though they are not sent to the arms, hands and legs. The signals are interpreted and translated into cursor movements, offering the user an alternate BrainGate pathway to control a computer with thought, just as individuals who have the ability to move their hands use a mouse.


The BrainGate contains tiny spikes that extend down about one millimetre into the brain after being implanted beneath the skull, monitoring the activity of a small group of neurons. It is now possible for a patient with a spinal cord injury to produce brain signals that relay the intention of moving the paralyzed limbs, as signals to an implanted sensor, which are then output as electronic impulses. These impulses enable the user to operate mechanical devices with the help of a computer cursor. Matthew Nagle, a 25-year-old Massachusetts man with a severe spinal cord injury, has been paralyzed from the neck down since 2001. After taking part in a clinical trial of this system, he has opened e-mail, switched TV channels and turned on lights; he even moved a robotic hand from his wheelchair. This marks the first time that neural movement signals have been recorded and decoded in a human with spinal cord injury. The system is also the first to allow a human to control his surrounding environment using his mind.


How does the brain control motor function?


The brain is “hardwired” with connections, which are made by billions of neurons that make electricity whenever they are stimulated. The electrical patterns are called brain waves. Neurons act like the wires and gates in a computer, gathering and transmitting electrochemical signals over distances as far as several feet. The brain encodes information not by relying on single neurons, but by spreading it across large populations of neurons, and by rapidly adapting to new circumstances.


Motor neurons carry signals from the central nervous system to the muscles, skin and glands of the body, while sensory neurons carry signals from those outer parts of the body to the central nervous system. Receptors sense things like chemicals, light, and sound and encode this information into electrochemical signals transmitted by the sensory neurons. And interneurons tie everything together by connecting the various neurons within the brain and spinal cord. The part of the brain that controls motor skills is located at the rear of the frontal lobe.


How does this communication happen? Muscles in the body’s limbs contain embedded sensors called muscle spindles that measure the length and speed of the muscles as they stretch and contract as you move. Other sensors in the skin respond to stretching and pressure. Even if paralysis or disease damages the part of the brain that processes movement, the brain still makes neural signals. They’re just not being sent to the arms, hands and legs.


A technique called neurofeedback uses connecting sensors on the scalp to translate brain waves into information a person can learn from. The sensors register different frequencies of the signals produced in the brain. These changes in brain wave patterns indicate whether someone is concentrating or suppressing his impulses, or whether he is relaxed or tense.


NEUROPROSTHETIC DEVICE:


A neuroprosthetic device known as BrainGate converts brain activity into computer commands. A sensor is implanted on the brain, and electrodes are hooked up to wires that travel to a pedestal on the scalp. From there, a fiber optic cable carries the brain activity data to a nearby computer.


PRINCIPLE:


"The principle of operation of the BrainGate Neural Interface System is that with intact brain function, neural signals are generated even though they are not sent to the arms, hands and legs. These signals are interpreted by the System and a cursor is shown to the user on a computer screen that provides an alternate "BrainGate pathway". The user can use that cursor to control the computer, just as a mouse is used."
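
As an illustration of how "signals are interpreted by the System", here is a small C sketch of a linear decoder of the kind used in neural interface research: recorded firing rates are combined with fitted weights to produce a 2-D cursor velocity. The neuron count, rates and weights are all made up; this is not the actual BrainGate algorithm, whose internals the text does not give.

    #include <stdio.h>

    #define NEURONS 4

    int main(void) {
        double rates[NEURONS] = {12.0, 30.5, 8.2, 22.1}; /* spikes/s, hypothetical */
        double wx[NEURONS] = {0.10, -0.05, 0.02, 0.07};  /* weights for x velocity */
        double wy[NEURONS] = {-0.03, 0.08, 0.11, -0.02}; /* weights for y velocity */
        double vx = 0.0, vy = 0.0;
        /* Weighted sum of firing rates: the simplest population decoder.
           Real systems fit the weights from calibration trials. */
        for (int i = 0; i < NEURONS; i++) {
            vx += wx[i] * rates[i];
            vy += wy[i] * rates[i];
        }
        printf("cursor velocity: (%.2f, %.2f)\n", vx, vy);
        return 0;
    }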


Download your Seminar Reports for Braingate System

Report and Abstract for Cryogenic Engine

Download your Seminar Reports for Cryogenic Engine

Today's modern society and its financial system are characteristically dependent on the mobility of people and cargo, globally and cost-effectively. Global demand for air transport has increased in the past decade because of its reliability, speed and efficiency, but the price is the emission of harmful gases into the atmosphere. Despite the warnings of global summits, and the consequent measures to level out environmental pollution, fossil fuels will eventually run out in the next 50 years. Due to the projected growth in aviation, improvements in conventional aircraft design, engine technology and the efficiency of the mobility system must be made to avoid the effects of increased emissions. In the search for an alternative fuel, liquid hydrogen (LH2), currently used in rockets, is ideal for aircraft. LH2 is produced from water by electrolysis, and upon combustion its by-products are water and small amounts of nitrogen oxides.


CRYOPLANE, a project of 35 partners from 11 European nations, is a comprehensive system analysis of LH2-fuelled aircraft. The project evaluates the technical and economic feasibility, safety and environmental compatibility of LH2 as an aviation fuel. Its goals are: delineation of aircraft configurations for all categories of commercial aircraft; quantification of performance and fuel efficiency; design analysis of the LH2 fuel system and its potential synergies with other aircraft systems; definition of an engine concept with emphasis on minimizing nitrogen oxide emissions; and a study of aircraft-specific safety aspects and a hydrogen-specific fire protection system. Finally, the CRYOPLANE project must indicate possible strategies for a smooth transition to LH2 fuel. Airport infrastructure for fuel production and distribution, along with ground and flight operations, will also be analysed.


DEVELOPMENT OF CRYOGENIC FUEL AIRCRAFT


In the mid-1970s, an energy strategy dominated in the USSR according to which atomic energy was supposed to be utilized first, while oil and gas were to be considered of minor importance in view of their small resources, as was erroneously believed at that time.


The realization of the Hydrogen Energy Program then started, and Tupolev's specialists were involved in it. As has happened many times in the history of our company, Alexey Tupolev took a courageous decision: to build a "hydrogen" aircraft. Such an aircraft was built and successfully tested without any serious incidents. It was preceded by a long-term program of bench and ground tests intended to verify the functioning of the new systems (there were more than 30 such systems on the aircraft) and, mainly, to ensure safe operation.


Unfortunately, the energy strategy mentioned above turned out to be not very correct. Atomic energy has not become dominant; it was natural gas that turned out to be of paramount importance in the energy program of our country, its share exceeding 50% of the energy balance. That is why our flying laboratory, having the status of the experimental TU-155 aircraft, was modified to use not only liquid hydrogen but also liquefied natural gas (LNG). This is how the first cryogenic aircraft in the world was built.




The remarkable properties of liquid hydrogen as an aviation fuel, first of all its high ecological cleanliness, high heat of combustion and high cooling capacity, attracted the attention of aviation specialists to this type of fuel. Liquid hydrogen allows aircraft performance to be improved significantly and makes it possible to build aircraft operating at speeds of M > 6. Our activities on liquid hydrogen therefore serve as scientific and technological groundwork that will be used in the near-term outlook. However, the extremely high price of liquid hydrogen makes its commercial use impossible for a long time.



Speaking of the near future, tomorrow's task is to introduce LNG as an aviation fuel, which was reflected in the "Program on development of Russian civil aviation for the period from 2002 to 2010 and for the period till 2015". The oil shortage is growing: during the previous 25 years, the specific weight of oil in the world's energy balance decreased by more than 10%.


Currently the price of kerosene is 8000 rubles per tonne, while the LNG price is 3000 rubles per tonne. The benefit is therefore 5000 rubles for each tonne of kerosene replaced, and in the opinion of many specialists this benefit is likely to grow constantly.
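
A small worked example of the claimed saving, in C. Only the two per-tonne prices come from the text; the annual tonnage is hypothetical, and the tonne-for-tonne replacement of kerosene by LNG is the same simplification the text itself makes.

    #include <stdio.h>

    int main(void) {
        double kerosene = 8000.0;  /* rubles per tonne (from the text) */
        double lng = 3000.0;       /* rubles per tonne (from the text) */
        double tonnes = 100.0;     /* tonnes replaced per year, hypothetical */
        printf("saving: %.0f rubles per tonne, %.0f rubles per year\n",
               kerosene - lng, (kerosene - lng) * tonnes);
        return 0;
    }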


Recently a scientific "explosion" happened in the world, and especially in Russia, provoking the vision that traditional and non-traditional resources of natural gas can be an order of magnitude greater than previously thought and can exceed the total amount of traditional fossil fuel on earth.


Download your Seminar Reports for Cryogenic Engine

Report and Abstract for Bicmos Technology

The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices used in practical applications, and bipolar technology dominated until the late 1980s, when the trend took a turn as MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found ever more widespread use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS covered more than 90% of the total MOS market.


In 1983 a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies for high-speed, low-power and highly functional VLSI circuits, especially as the BiCMOS process has been enhanced and integrated into the CMOS process without any additional steps. Because the process steps required for CMOS and bipolar are similar, these steps can be shared between them.


BiCMOS Technology


The concept of system-on-chip (SOC) has evolved as the number of gates available to a designer has increased and as CMOS technology has migrated from a minimum feature size of several microns to close to 0.1 µm. Over the last decade, the integration of analog circuit blocks has become an increasingly common feature of SOC development, motivated by the desire to shrink the number of chips and passives on a PC board. This, in turn, reduces system size and cost and improves reliability by requiring fewer components to be mounted on a PC board. Power dissipation of the system also improves with the elimination of the chip input-output (I/O) interconnect blocks. Superior matching and control of integrated components also allows new circuit architectures to be used that cannot be attempted in multi-chip architectures. Driving PC board traces consumes significant power, both in overcoming the larger capacitances on the PC board and through larger signal swings to overcome signal cross talk and noise on the PC board. Large-scale microcomputer systems with integrated peripherals, the complete digital processor of a cellular phone, and the switching system for a wire-line data-communication system are some of the many applications of digital SOC systems.


Examples of analog or mixed-signal SOC devices include analog modems; broadband wired digital communication chips, such as DSL and cable modems; wireless telephone chips that combine voice-band codecs with baseband modulation and demodulation functions; and ICs that function as the complete read channel for disc drives. The analog section of these chips includes wideband amplifiers, filters, phase locked loops, analog-to-digital converters, digital-to-analog converters, operational amplifiers, current references, and voltage references. Many of these systems take advantage of the digital processors in an SOC chip to auto-calibrate the analog section of the chip, including canceling dc offsets and reducing linearity errors within data converters. Digital processors also allow tuning of analog blocks, such as centering filter cutoff frequencies. Built-in self-test functions of the analog block are also possible through the use of on-chip digital processors.


Analog or mixed-signal SOC integration is inappropriate for designs with low production volume and low margins. In this case, the nonrecurring engineering costs of designing the SOC chip and its mask set will far exceed the design cost for a system with standard programmable digital parts, standard analog and RF functional blocks, and discrete components. Noise from the digital electronics can also limit the practicality of forming an SOC with high-precision analog or RF circuits. A system that requires power-supply voltages greater than 3.6 V in its analog or RF stages is also an unattractive candidate for an SOC, because additional process modifications would be required for the silicon devices to work above the standard printed circuit board interface voltage of 3.3 V ± 10%.


Before a high-performance analog system can be integrated on a digital chip, the analog circuit blocks must have critical passive components available, such as resistors and capacitors. Digital blocks, in contrast, require only n-channel metal-oxide semiconductor (NMOS) and p-channel metal-oxide semiconductor (PMOS) transistors. Added process steps may be required to achieve characteristics for resistors and capacitors suitable for high-performance analog circuits. These steps create linear capacitors with low levels of parasitic capacitance coupling to other parts of the IC, such as the substrate. Though additional process steps may be needed for the resistors, it may be possible to use the diffusion steps instead, such as the N and P implants that make up the drains and sources of the MOS devices, as can the polysilicon gate used as part of the CMOS devices. The shortcomings of these elements as resistors, beyond their high parasitic capacitances, are their high temperature and voltage coefficients and the limited control of the resistor's absolute value.


Even with these additional process steps, analog engineers must cope with small capacitor sizes (50 pF maximum) and variations in the absolute value of both the resistors and capacitors (with no tracking between the resistors and capacitors that could stabilize the resistor-capacitor (RC) time-constant product). Analog designers have developed novel circuits, such as switched-capacitor circuits, to surmount these obstacles. Indeed, CMOS enabled the switched-capacitor circuit.
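
A short C sketch of the switched-capacitor idea mentioned above: a capacitor toggled at clock frequency f moves charge like a resistor of value R = 1/(fC), so circuit time constants depend on a capacitor ratio and a clock rather than on poorly controlled absolute R and C values. The component values are hypothetical.

    #include <stdio.h>

    int main(void) {
        double f = 1e6;    /* switching clock: 1 MHz, hypothetical */
        double c = 1e-12;  /* switched capacitor: 1 pF, hypothetical */
        double r_eq = 1.0 / (f * c);  /* emulated resistance, R = 1/(fC) */
        printf("equivalent resistance: %.0f kilohm\n", r_eq / 1e3);
        return 0;
    }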


Beyond component considerations, circuit layout must be done carefully to prevent digital switching noise from degrading circuit performance. For example, power supply routing must be carefully managed in analog circuits. The quality of computer models for active and passive components is also a thorny issue: models that are sufficiently accurate to estimate the speed performance of digital gates are not accurate enough to predict gain or high-frequency response. Drain conductance, for instance, is a key analog design parameter, though it does not affect digital gate speed. Thus, heightened attention to the modeling of active, passive, and parasitic components is needed for the circuit to perform as expected on its first pass through the silicon manufacturer.

The introduction of RF circuits to an SOC creates numerous problems. The circuits are sensitive to noise on the power supply and substrate, owing to the low-level signals at which they operate and the likelihood of correlated digital noise mixing with the nonlinear RF components, giving rise to undesired spurs in the RF circuit's outputs. Moreover, RF circuits require their own set of active and passive components. Foremost is the need for a high-speed, low-noise bipolar transistor. CMOS devices have yet to demonstrate that they can be used in high-volume RF systems with challenging specifications, such as those found in cellular phones, while concurrently offering competitive power consumption and die area. If the RF SOC is to be competitive with a multiple-chip or discrete-component system, the bipolar devices available in the process intended for an RF SOC application must be state-of-the-art.


Inductors play a critical role in RF circuits, especially in low-noise amplifiers, mixers, filters, power amplifiers, and oscillators. The inductor enables the tuned narrow-band inductance-capacitance (LC) circuit. These circuits not only filter undesired signals, but also allow the design of circuits that can operate over a narrow band at much higher frequencies than would be possible for a broadband design without inductors. Oscillators with the very low phase noise required by high-performance RF systems also need LC tank circuits; for this application, high (larger than ten) Q-factors are required. Inductors are further used for impedance matching, bandwidth extension through peaking of the response, degeneration (extension of the linear range of operation at the cost of absolute gain) without the noise penalty of resistors, and as current sources. An inductor used as a current source allows more headroom for the active devices than an active current source or a resistor; this is pivotal for battery-powered devices or designs that incorporate bipolar or MOS field-effect transistors (MOSFETs) with low breakdown voltages. Given their role in RF circuits, inductors must be available in any IC process that is to be used to form an RF SOC.


RF circuits place additional requirements on the on-chip resistors and capacitors. Resistors must be very linear, have minimal temperature coefficients, have better control of absolute accuracy, and demonstrate very low parasitic coupling to the substrate. Capacitors need high Q in RF systems. Absolute values of the capacitors may need to be much larger than those in an analog circuit when the capacitors are used for impedance matching, bypassing, or phase-locked loop (PLL) filter applications.


Last, a voltage-controlled capacitor (varactor) is required to make RF voltage-controlled oscillators with low phase noise. Varactors are paired with inductors to form the tuned circuit for the oscillator; a dc voltage coupled onto the varactor tunes the oscillator frequency. Varactors may be formed from reverse-biased junctions in a process. While the grading coefficient of the PN junction is not optimal for a large variation of capacitance with applied voltage, as in a discrete varactor, the junctions are still usable. PN junctions available in an IC process include the junctions of the PMOS (in the n-well process) and NMOS (in the p-well process) drains, as well as the bipolar BE or BC junctions. MOSFETs can also be used as varactors, since their capacitance changes as the gate voltage changes. Limitations of on-chip varactors include poor linearity across the tuning range, limited tuning range, and low Q. These limitations can affect circuit performance: in a PLL, for example, low varactor Q degrades performance, and the change of varactor capacitance with tuning voltage can alter the PLL loop dynamics over the range of frequencies to which the PLL locks. A tight varactor tuning range and a large variation in the absolute capacitance value of the varactor will limit the PLL lock range.
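
The role of the varactor follows from the standard LC resonance formula f = 1/(2*pi*sqrt(LC)): sweeping the varactor capacitance sweeps the oscillation frequency. The C sketch below computes a tuning range; the formula is standard, but the component values are hypothetical and not from the text.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double pi = 3.14159265358979;
        double l = 5e-9;      /* 5 nH tank inductor, hypothetical */
        double cmin = 1e-12;  /* varactor capacitance range, hypothetical */
        double cmax = 3e-12;
        double fhigh = 1.0 / (2.0 * pi * sqrt(l * cmin));
        double flow  = 1.0 / (2.0 * pi * sqrt(l * cmax));
        printf("VCO tuning range: %.2f GHz to %.2f GHz\n",
               flow / 1e9, fhigh / 1e9);
        return 0;
    }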


Download your Seminar Reports for Bicmos Technology

Report and Abstract for Wind Power Generation

Download your Seminar Reports for Wind Power Generation


Abstract:


This paper deals with wind power generation and the problems that arise in generation. As the energy crisis is very acute in developing countries like India, there is an urgent need to look for other sources of energy that are clean and pollution-free, since conventional sources cause much pollution. This paved the path for non-conventional sources. Of all the renewable energy sources, the one that has matured to the level of being a utility generation source is wind energy. It is estimated that the wind potential is 1.6 x 10^7 MW, which is about the same as the world energy requirement. But the problem is that wind speed fluctuates strongly, so many problems arise during power generation. We therefore concentrate mainly on the problems that occur during generation and how they can be rectified. The problems faced are due to local impacts and system impacts. Local impacts are those that occur in the vicinity of the wind turbine or wind farm. System impacts are those that affect the behavior of the system as a whole. Local impacts can be addressed using modern power electronics and special types of wind turbines suited to the conditions. System impacts can be rectified to some extent by designing turbines to withstand voltage variations of certain magnitudes. Problems due to high wind can be rectified by controlling the rotor speed through a gear mechanism or by computer-aided techniques.


While fossil fuels will remain the main fuels for thermal power, there is a fear that they will eventually be exhausted in the next century, so many countries are trying systems based on non-conventional and renewable sources: solar, wind, sea, geothermal and biomass. The solar power reaching the earth is about 10^16 watts, while the total world demand is about 10^13 watts; if we utilized just 5% of that solar energy, it would be 50 times what the world requires. The wind potential is estimated at 1.6 x 10^7 MW, which is about the same as world energy consumption. So the development of non-conventional energy sources is very economical.

Wind energy is available throughout the day, unlike solar energy, and after solar it is the second largest non-conventional source of energy. In India there is a desperate need for energy during mid-summer, due to the lack of hydel power generation, one of the main sources of energy; this can be met to some extent by wind energy, as winds are very high during this period. With photovoltaics the power generated is dc, so it must be converted to ac before being fed to the grid, whereas wind energy produces ac directly. In coastal areas, the cost of power generation from wind has become lower than that of diesel power and comparable to thermal power.

From the study of wind distribution, it is estimated that about 27% of the land surface is exposed to an annual wind speed higher than 18.36 kmph at 10 m above the surface.


ORIGIN OF WIND


The earth is formed of highly varied surfaces, and when solar radiation reaches the earth it creates temperature, density and pressure differences. This causes the development of wind.


GENERATION OF POWER FROM WIND


The working principle of a wind turbine encompasses two conversion processes, carried out by its components: the rotor extracts kinetic energy from the wind and converts it into a generator torque, and the generator converts this torque into electric power and feeds it into the grid.
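
The kinetic-energy extraction just described follows the standard relation P = (1/2) * rho * A * v^3 * Cp, where rho is the air density, A the rotor swept area, v the wind speed and Cp the power coefficient (at most 0.593, the Betz limit). A minimal C sketch with hypothetical turbine dimensions:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double pi = 3.14159265358979;
        double rho = 1.225;  /* air density, kg/m^3 at sea level */
        double r = 40.0;     /* rotor radius in m, hypothetical */
        double v = 10.0;     /* wind speed in m/s, hypothetical */
        double cp = 0.40;    /* power coefficient, below the Betz limit */
        double area = pi * r * r;              /* swept area */
        double p = 0.5 * rho * area * v * v * v * cp;
        printf("extracted power: %.1f MW\n", p / 1e6);  /* about 1.2 MW */
        return 0;
    }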


GENERATING SYSTEM


A wind turbine is a complex system in which knowledge from the areas of aerodynamics and mechanical, electrical and control engineering is applied.


For the generating system, nearly all wind turbines currently installed use one of the following systems:


1. Squirrel cage induction generator


2. Doubly fed induction generator


3. Direct drive synchronous generator


The first of these is a fixed-speed (constant-speed) system, while the other two are variable-speed turbines.


1. SQUIRREL CAGE INDUCTION GENERATOR



This is the oldest system. It consists of a conventional, directly grid-coupled squirrel cage induction generator, whose slip, and hence rotor speed, varies with the amount of power generated. Its drawback is that it always consumes reactive power, which is undesirable in most cases, particularly for large turbines and weak grids. This can be partly or fully compensated by capacitors in order to achieve a power factor close to one.
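
The slip mentioned above is easy to quantify: the synchronous speed is ns = 120f/p rpm and the slip is s = (ns - nr)/ns, negative when the machine generates. A small C sketch with illustrative values:

    #include <stdio.h>

    int main(void) {
        double f = 50.0;   /* grid frequency, Hz */
        int poles = 4;     /* pole count, illustrative */
        double ns = 120.0 * f / poles;  /* synchronous speed: 1500 rpm */
        double nr = 1520.0;             /* rotor speed while generating, rpm */
        double slip = (ns - nr) / ns;   /* negative slip: generating mode */
        printf("synchronous speed %.0f rpm, slip %.3f\n", ns, slip);
        return 0;
    }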

2. DOUBLY FED INDUCTION GENERATOR

This is a variable-speed turbine. A back-to-back voltage source converter feeds the three-phase rotor winding, so the mechanical and electrical rotor frequencies are decoupled, and the electrical stator and rotor frequencies can be matched independently of the mechanical rotor speed.

3. DIRECT DRIVE SYNCHRONOUS GENERATOR



In this case the generator is completely decoupled from the grid by a power electronic converter connected to the stator winding. The direct drive generator is excited using an excitation winding or permanent magnets.

Directly grid-coupled synchronous generators, however, are not used in wind turbines due to their unfavorable dynamic characteristics: when used in combination with a fluctuating prime mover, they cause high structural loads and a risk of instability during wind gusts.


IMPACTS


Impacts can be classified mainly into two types.


1. Local impacts.


2. Systems impacts.


1. LOCAL IMPACTS.


Local impacts of wind power are impacts that occur in the (electrical) vicinity of a wind turbine or wind farm and can be attributed to a specific turbine or farm. Local impacts occur at each turbine and are largely independent of the overall wind power penetration level in the system as a whole.


The local impacts are:

1. Branch flows and node voltages.
2. Protection schemes, fault currents and switchgear ratings.
3. Harmonic distortion.
4. Flicker.

BRANCH FLOWS AND NODE VOLTAGES.


The way in which wind turbines locally affect the node voltages depends on the type of turbine used. The squirrel cage induction generator of a constant-speed turbine cannot affect node voltages by adapting its reactive power exchange with the grid; for this, additional equipment for generating controllable amounts of reactive power would be necessary. Variable-speed turbines, on the other hand, have, at least theoretically, the capability of varying their reactive power to affect their terminal voltage, but this depends on the rating of the controllers of the power electronic converter.


PROTECTION SCHEMES, FAULTS CURRENTS AND SWITCH GEAR RATINGS


Protection schemes and switchgear ratings must be checked when connecting new generation capacity; these are independent of the prime mover of the generator. The contribution of wind turbines to fault currents differs between the three main wind turbine types. Constant-speed turbines are based on a directly grid-coupled squirrel cage induction generator; they therefore contribute to the fault current and rely on conventional protection schemes. Turbines based on the doubly fed induction generator also contribute to the fault current.


However, the control system of power electronic converter that controls the rotor current measures fault currents very quickly. Due to the sensitivity of power electronics to over currents, this wind turbine type is currently quickly disconnected when a fault is detected. Wind turbines with a direct drive generator hardly contribute to the fault current because the power electronic converter through which the generator is connected to the grid is not capable of supplying a fault current.


HARMONIC DISTORTION


Harmonic distortion is mainly an issue with variable-speed turbines, because they contain power electronic devices, which are sources of harmonics. Harmonics cause overheating of transformers and generators. They also increase the currents through shunt capacitors, leading to failure of such capacitors.


A practical solution would be to provide shunt filters at the PCC (point of common coupling) of non-linear loads and reduce the harmonic currents flowing all over the network. This would result in lower voltage distortion. In the case of modern power electronic converters, with their high switching frequencies and advanced algorithms and filtering techniques, harmonic distortion should not be a principal problem. Well-designed, directly coupled synchronous and asynchronous generators hardly emit harmonics.
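
Harmonic content is usually summarized as total harmonic distortion, THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude, which is the quantity the shunt filters above are meant to reduce. A C sketch with hypothetical current amplitudes:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double fundamental = 100.0;           /* amplitude of the 50 Hz component */
        double harmonics[] = {8.0, 5.0, 3.0}; /* higher-harmonic amplitudes, hypothetical */
        int n = sizeof harmonics / sizeof harmonics[0];
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += harmonics[i] * harmonics[i];
        printf("THD = %.2f%%\n", 100.0 * sqrt(sum) / fundamental);
        return 0;
    }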


FLICKER.


Flicker is a specific property of wind turbines. Wind is a quite rapidly fluctuating prime mover. In constant speed turbines, prime mover fluctuations are directly translated into output power fluctuation, because there is no buffer between mechanical input and electrical output. Depending on the strength of the grid connection, the resulting power fluctuations can result in grid voltage fluctuations, which can cause unwanted and annoying fluctuations in bulb brightness. This problem is referred to as flicker.


In general, no flicker problems occur with variable speed turbines, because in these turbines wind speed fluctuations are not directly translated into output power fluctuations. The rotor inertia acts as an energy buffer.


SYSTEM IMPACTS


System impacts are the impacts that affect the behavior of the system as a whole. They are an inherent consequence of the application of wind power but cannot be attributed to individual turbines or farms. They are strongly related to the wind power penetration level in the system, that is, the contribution of wind power to the actual load.


1. Power system dynamics and stability


2. Reactive power and voltage control.


3. Frequency control and load dispatching of conventional units.


1. POWER SYSTEM DYNAMICS AND STABILITY.


In order to investigate the impact of wind power on power system dynamics and stability, adequate wind turbine models are essential. Squirrel cage induction generators used in constant-speed turbines can lead to voltage and rotor speed instability. During a fault they accelerate, due to the imbalance between the mechanical power extracted from the wind and the electrical power supplied to the grid. When the voltage restores, they consume much reactive power, impeding voltage restoration; unless the voltage returns to its normal value quickly, the turbines continue to accelerate and to consume large amounts of reactive power, which eventually leads to voltage and rotor speed instability. With variable speed turbines, the sensitivity of the power electronics to over-currents caused by voltage drops can have serious consequences for the stability of power systems.


To prevent this, some grid companies and transmission system operators prescribe that wind turbines must be able to withstand voltage drops of certain magnitudes and durations, in order to prevent the disconnection of a large amount of wind power during a fault. To meet these requirements, manufacturers of variable-speed wind turbines are implementing solutions that reduce the sensitivity of their turbines to grid voltage drops.


REACTIVE POWER AND VOLTAGE CONTROL.


The impact of wind power on reactive power generation and voltage control originates first from the fact that not all wind turbines are capable of varying their reactive power output.


First of all, wind power cannot be located as flexibly as conventional generation. Secondly, wind turbines are relatively weakly coupled to the system, because their output voltage is rather low and they are often erected at distant locations; this further reduces their contribution to voltage control. When wind turbines at remote locations replace the output of conventional synchronous generators on a large scale, the voltage control aspect must therefore be taken into account explicitly.




FREQUENCY CONTROL AND LOAD DISPATCHING OF CONVENTIONAL UNITS.


The impact of wind power on frequency control and load dispatching is caused by the fact that the prime mover of wind power is uncontrollable. Therefore, wind power hardly ever contributes to primary frequency regulation. Further, the variability of the wind on the longer term tends to complicate the load dispatching with the conventional units that remain in the system, as the demand curve to be matched by these units is far less smooth than would be the case without wind power. This heavily affects the dispatch of power from the conventional generators.


Note that the aggregate short-term output power fluctuations of a large number of wind turbines are very smooth and are generally not considered a problem. The impact of wind power on frequency control and load dispatching becomes more severe the higher the wind power penetration level is: the higher the penetration, the larger the impact of wind power on the demand curve faced by the remaining conventional units. It is, however, impossible to quantify the wind power penetration level at which system-wide effects start to occur, because of the differences in demand curve and network topology between various power systems.


The above impacts are solved to some extent. But there is no proper solution to the problems caused by high-speed winds.


During high-speed winds, the turbine speed exceeds its limit. This will cause


1. very high fluctuations in voltage.


2. very high fluctuations in frequencies.


3. It may damage the rotor.


These problems can be solved to some extent by:


1. Using a governing mechanism to operate a gear mechanism that controls the speed of the wind turbine rotor.


2. Using computer techniques to control the speed of the turbine, or disconnecting the turbine from the generator during high-speed winds.


3. Connecting parachutes to the rotor blades.


CONCLUSION:


Even though the production of wind power is problematic, given the large energy crisis this is not a reason to forgo it. The wind potential in India is about 20,000 MW, but what we have presently achieved is just a fraction of the total potential. If we utilize the potential to a greater extent, the energy crisis will be reduced. Research is still going on to design more efficient wind turbines.


 Download your Seminar Reports for Wind Power Generation

Report and Abstract for Spinning LED Display

Download your Seminar Reports for Spinning LED Display


What is a Spinning LED Display?
Created by Logan Glasson
The Spinning LED Display was built with the C programming language, an ATtiny2313 microcontroller, trigonometry and a sensor chip. A similar kit, SpokePOV, mounts on bicycle spokes. The downside of that kitset is that, although it has a high enough resolution, its software does not contain a utility for converting bitmap images, so everything to be shown by the program must be drawn in SpokePOV.


How it Works
In a POV (persistence of vision) device, the LEDs illuminate only a single line of the image at a time, but due to the speed of the disk and persistence of vision a complete image can be perceived, because the human eye has a relatively slow 'shutter speed' of about 1/50 of a second.
A motor mounted at the back achieves a constant spin, and an optical switch transmits a light beam for synchronization. The prototype circuit uses 3 mm super-bright blue LEDs for their wide viewing angles, so that the image appears lit even from the side. The edges of the LEDs were also trimmed so that they fit more closely together, which delivers a higher-resolution image.
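
The timing behind the POV effect is simple arithmetic: one revolution must fit within the eye's persistence (about 1/50 s per the text), and each image column gets an equal slice of that revolution. A C sketch with hypothetical disk speed and column count:

    #include <stdio.h>

    int main(void) {
        double rpm = 3000.0;  /* disk speed, hypothetical: one revolution takes
                                 20 ms, within the ~1/50 s persistence of vision */
        int columns = 120;    /* image columns per revolution, hypothetical */
        double rev_ms = 60000.0 / rpm;            /* time per revolution */
        double col_us = 1000.0 * rev_ms / columns; /* time per lit column */
        printf("revolution: %.1f ms, time per column: %.1f us\n",
               rev_ms, col_us);
        return 0;
    }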


Download your Seminar Reports for Spinning LED Display

Report and Abstract for SMS Based Electronic Notice Board

Download your Seminar Reports for SMS Based Electronic Notice Board


ABSTRACT


A notice board is a primary requirement in any institution or organization, and in public utility places like bus stations, railway stations and parks. But sticking up various notices day after day is a difficult process, and a separate person is required to take care of the display. This project describes an advanced hi-tech wireless notice board.


SMS Based Electronic Notice Board


The project is built around the 8051 microcontroller from Atmel. This microcontroller provides all the functionality of the display and wireless control, and also takes care of creating different display effects for the given text. The display is obtained on an LCD (16x4) matrix display array on a printed circuit board. A GSM/CDMA mobile phone is used to enter the required text or notice, and the scrolling speed of the text can be changed according to user requirements. After entering the text, the SMS is sent to the number connected to the LCD display. At any time the user can add, remove or alter the text according to his requirements. At the receiving end, the GSM modem receives the message and passes it through a MAX232 level converter to the 8051 microcontroller, and the message is displayed on the LCD (16x4) matrix display array. This project uses a regulated 5 V, 500 mA power supply; a 7805 three-terminal voltage regulator is used for voltage regulation, and a bridge-type full-wave rectifier rectifies the ac output of the secondary of a 230/12 V step-down transformer.
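
The dialogue between the 8051 and the GSM modem typically uses the standard AT command set (AT+CMGF selects SMS text mode, AT+CMGR reads a stored message). The C sketch below shows the command sequence only; uart_puts() and uart_gets() are hypothetical stand-ins for the board's serial routines, stubbed here so the sketch compiles and runs on a desktop.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical UART helpers: on the real board these would talk to
       the modem through the MAX232; stubbed here for illustration. */
    static void uart_puts(const char *s) { printf("-> %s\n", s); }
    static void uart_gets(char *buf, int n) {
        strncpy(buf, "OK", (size_t)n);
        buf[n - 1] = '\0';
    }

    int main(void) {
        char reply[160];
        uart_puts("AT");         /* check that the modem answers */
        uart_gets(reply, sizeof reply);
        uart_puts("AT+CMGF=1");  /* select SMS text mode */
        uart_gets(reply, sizeof reply);
        uart_puts("AT+CMGR=1");  /* read the message in slot 1; the reply
                                    would carry the notice text to display */
        uart_gets(reply, sizeof reply);
        return 0;
    }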


INTRODUCTION


An embedded system is a combination of software and hardware that performs a dedicated task. Some of the main devices used in embedded products are microprocessors and microcontrollers. Microprocessors are commonly referred to as general-purpose processors, as they simply accept inputs, process them and give the output. In contrast, a microcontroller not only accepts data as input but also manipulates it, interfaces the data with various devices, controls the data and finally gives the result. As everyone in this competitive world prefers to make things easy and simple to handle, this project sets an example to some extent.


Nowadays every advertisement is going digital. Big shops and shopping centers use digital moving displays, and in railway stations and bus stands everything from ticket information to platform numbers is shown on digital moving displays. But with these displays, whoever wants to change the message or style has to go there and connect the display to a PC or laptop; to display the same message in the main centers of a city, a person would have to travel to each display with a laptop and change the message by connecting it to a PC. This project is mainly useful for the police or army: if displays are connected at all the main centers of a city and a crucial message must be shown within 5 minutes, the conventional approach cannot manage it. Keeping this in mind, we are designing a new display system that can be accessed remotely. We use GSM technology, one of the newer technologies in the embedded field, to make the communication between the microcontroller and a mobile phone. The project is a remote notice board with a modem connected to it: if the user wants to display some message, he sends it in SMS format; the modem in the display system receives the message and updates the display accordingly. For every message received from the user's mobile, the system checks the password, and if the password is correct the controller displays the message.


“GSM based Control System” implements the emerging applications of the GSM technology. Using GSM networks, a control system has been proposed that will act as an embedded system which can monitor and control appliances and other devices locally using built-in input and output peripherals.


GSM (Global System for Mobile Communications): It is a cellular communication standard.


SMS (Short Message Service): It is a service available on most digital mobile phones that permits the sending of short messages (also known as text messaging).


The scope of this project is to introduce a new technology for a notice board display system using GSM. A user can send a message from anywhere in the world. Multilingual display can be another added variation of the project. Display boards are one of the single most important media for transferring information to the maximum number of end users. This feature can be added by programming the microcontroller to use different encoding/decoding schemes in different areas as per the local language, which will increase the number of informed users. Graphical display can also be considered as a long-term but achievable target. MMS technology, along with relatively high-end microcontrollers to carry out graphics encoding and decoding and a more expansive bank of usable memory, can make this task a walk in the park.


Advantages:

1. A lot of interaction and information sharing occurs.
2. No printing and photocopying costs.
3. No manual effort.
4. Helps to retain and develop the knowledge base of your college or office.
5. Saves time, energy and, finally, the environment.

Download your Seminar Reports for SMS Based Electronic Notice Board

Report and Abstract for Kyoto Protocol

Download your Seminar Reports for Kyoto Protocol

The Kyoto Protocol is a protocol to the United Nations Framework Convention on Climate Change (UNFCCC or FCCC), aimed at fighting global warming. The UNFCCC is an international environmental treaty, adopted in 1992 in Rio de Janeiro, that provided strategies and mandatory targets for the reduction of greenhouse gas emissions for all signatories, with the goal of achieving "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system."

Participation in the Kyoto Protocol, as of December 2010:
Green = countries that have signed and ratified the treaty (Annex I & II countries in dark green).
Grey = countries that have not yet decided.
Brown = no intention to ratify at this stage.


Kyoto Protocol


History of the Kyoto Protocol:

In the 1980s and early 1990s, the issue of global warming came to the forefront of international politics. In 1992, 154 countries, including Canada, signed the United Nations Framework Convention on Climate Change in Rio de Janeiro, Brazil. The central element of the Convention was a commitment to stabilize greenhouse gas levels in the atmosphere within a timeframe sufficient to allow ecosystems to adapt naturally to climate change. Nations agreed, moreover, that developed countries (countries with modern, fully developed economies) were to take a leadership role in reducing greenhouse gas emissions. Implicit in this understanding was the recognition that developed nations had been the primary greenhouse gas emitters over the last century, and that emission stabilization would be more problematic for non-developed or developing countries.

At the third Conference of the Parties, held in Kyoto, Japan, member countries signed the Kyoto Protocol. The 1997 Protocol document was a comprehensive agreement that included precise greenhouse gas emission targets for each member country and the general framework of a greenhouse gas emissions-trading program. The 1997 agreement also provided a specific procedure for bringing the Protocol into full force and effect: the Protocol would have to be formally ratified by at least 55 industrialized nations accounting for a minimum of 55 percent of the total global greenhouse gas emissions produced in 1990.

In 1997, the Clinton Administration committed the United States to the Kyoto Protocol agreement, both as a signatory and as an active participant in its implementation negotiations. In 2001, following the election of George W. Bush in 2000, the Bush Administration announced a change in direction for the US: the US would no longer be formally ratifying the agreement. While accepting the general principles of global warming and the need for international cooperation to reduce levels of greenhouse gases in the earth's atmosphere, the Bush Administration was highly critical of many of the Protocol's components, in particular the exemption granted to China, the second largest emitter of greenhouse gases (after the United States). Under the Protocol, China is recognized as a "developing nation" and is, accordingly, exempt from emission reduction targets. The Bush Administration also expressed concerns over uncertainty in the precise impacts of global warming, as well as the potential impacts of the Protocol on the US economy.

By January 2004, several countries had ratified the Kyoto Protocol, including Japan, Canada, New Zealand, and most European signatories. Collectively, these ratifying countries represented approximately 44 percent of the total greenhouse gas emissions produced in 1990, only 11 percent shy of the 55 percent target cited in the Protocol's terms. The deciding factor in the eventual implementation of the Protocol was Russia, which represented 17 percent of total 1990 emissions. Russia had been unclear about whether it would ratify the Protocol; however, in November 2004, Russian President Vladimir Putin announced his government would indeed pass the agreement, ensuring the Protocol would come into effect in 2005.
On February 16, 2005, the Kyoto Protocol formally came into effect, committing key industrialized countries, including Canada, to specific targets for reducing or limiting their greenhouse gas emissions between 2008 and 2012.

Parties to the UNFCCC are classified as:
Annex I countries – industrialized countries and economies in transition.
Annex II countries – developed countries which pay for the costs of developing countries.
Non-Annex I countries – developing countries.

Annex II countries are a sub-group of the Annex I countries. A transition economy, or transitional economy, is an economy which is changing from a centrally planned economy to a free market. Transition economies undergo economic liberalization, where market forces set prices rather than a central planning organization, trade barriers are removed, government-owned enterprises and resources are privatized, and a financial sector is created to facilitate macroeconomic stabilization.

The Kyoto Protocol requires 55 industrialised countries to reduce their greenhouse gas emissions to target levels 5.2% below those of 1990. If unable to do so, they must buy emission credits from countries that are under these levels. It also provides that developed countries pay for the costs of developing countries. Developing countries have no reduction requirements under the Protocol; they may sell emission credits and receive funds and technology from Annex II countries for climate-related studies and projects. Many Annex I and Annex II countries overlap.

Annex I countries: there are 40 Annex I countries, and the European Union is also a member. These countries are classified as industrialized countries and countries in transition: Australia, Austria, Belarus, Belgium, Bulgaria, Canada, Croatia, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Latvia, Liechtenstein, Lithuania, Luxembourg, Monaco, Netherlands, New Zealand, Norway, Poland, Portugal, Romania, Russian Federation, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, United States of America.

Annex II countries: there are 23 Annex II countries and the European Union. These countries are classified as developed countries which pay for the costs of developing countries: Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Greece, Iceland, Ireland, Italy, Japan, Luxembourg, Netherlands, New Zealand, Norway, Portugal, Spain, Sweden, Switzerland, United Kingdom, United States of America.

A notable exception remains the USA, a major emitter of greenhouse gases. Australia signed the treaty on 3 December 2007. The Protocol was initially adopted on 11 December 1997 in Kyoto, Japan, and entered into force on 16 February 2005. As of November 2009, 187 members had signed and ratified the protocol. The major feature of the Kyoto Protocol is that it sets binding targets for industrialized countries for reducing greenhouse gas (GHG) emissions. As of August 2011, 191 members have signed and ratified the protocol; the only remaining signatory not to have ratified it is the United States.
Other states yet to ratify Kyoto include Afghanistan, Andorra and South Sudan, after Somalia ratified the Protocol on 26 July 2010. Under the Protocol, 39 industrialized countries and the European Union (the "Annex I countries") commit themselves to a reduction of four greenhouse gases (carbon dioxide, methane, nitrous oxide, sulphur hexafluoride) and two groups of gases (hydrofluorocarbons and perfluorocarbons) produced by them, and all member countries give general commitments. Annex I countries agreed to reduce their collective greenhouse gas emissions by 5.2% from the 1990 level. Emission limits do not include emissions by international aviation and shipping, and are in addition to the industrial gases – chlorofluorocarbons, or CFCs – which are dealt with under the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer.

The benchmark 1990 emission levels accepted by the Conference of the Parties of the UNFCCC were based on the values of "global warming potential" (GWP) calculated for the IPCC Second Assessment Report. These figures are used for converting the various greenhouse gas emissions into comparable CO2 equivalents (CO2-eq) when computing overall sources and sinks (a small worked example follows below). Targets for some countries are higher than for others, depending on their emission status: for instance, the emission cut target is set at 8% for the European Union and 7% for the USA.
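To make the CO2-equivalence arithmetic concrete, here is a minimal sketch in Python. The GWP values are the 100-year figures from the IPCC Second Assessment Report mentioned above; the emission inventory itself is purely hypothetical.

```python
# GWP-weighted conversion of a per-gas inventory into CO2 equivalents.
# GWP values: 100-year figures from the IPCC Second Assessment Report.
GWP_SAR = {
    "CO2": 1,      # carbon dioxide (the reference gas)
    "CH4": 21,     # methane
    "N2O": 310,    # nitrous oxide
    "SF6": 23900,  # sulphur hexafluoride
}

def co2_equivalent(inventory_tonnes):
    """Convert {gas: tonnes emitted} into total tonnes of CO2-eq."""
    return sum(GWP_SAR[gas] * t for gas, t in inventory_tonnes.items())

# Hypothetical national inventory, in tonnes of each gas:
inventory = {"CO2": 500_000, "CH4": 2_000, "N2O": 300}
print(co2_equivalent(inventory))  # 500000 + 42000 + 93000 = 635000 t CO2-eq
```

As the figures show, cutting a small tonnage of a high-GWP gas such as SF6 can count for as much as a large cut in CO2 itself.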

Top-ten emitters:

Ranking of the world's top ten emitters of GHGs for 2005 (each figure is the country's or region's emissions as a percentage of the global total):
1. China – 17%
2. United States – 16%
3. European Union – 11%
4. Indonesia – 6%
5. India – 5%
6. Russia – 5%
7. Brazil – 4%
8. Japan – 3%
9. Canada – 2%
10. Mexico – 2%

Key Dates:
• 1992 – UNFCCC adopted, providing targets and a framework for international regulation of greenhouse gases.
• 12/11/97 – Kyoto Protocol finalized and opened for signature; 55 parties accounting for at least 55% of 1990 GHG emissions are necessary for the Protocol to enter into force.
• 2/16/05 – Kyoto Protocol enters into force as Russia ratifies it.
• 12/06 – 169 countries and governmental entities have ratified the Protocol.
• 2008–12 – Deadline for Annex I countries to reduce GHG emissions to target levels. Each country is assigned a target; the average reduction is 5% below 1990 levels.

Objectives:

The objective is the "stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system." The objective of the Kyoto climate change conference was to establish a legally binding international agreement whereby all the participating nations commit themselves to tackling the issue of global warming and greenhouse gas emissions. The target agreed upon was an average reduction of 5.2% from 1990 levels by the year 2012. According to the treaty, by 2012 Annex I countries must have fulfilled their obligations to reduce greenhouse gas emissions established for the first commitment period (2008–2012).

Principal Concepts:

1. Commitments. The heart of the Protocol lies in establishing commitments for the reduction of greenhouse gases that are legally binding for Annex I countries, as well as general commitments for all member countries.

2. Implementation. In order to meet the objectives of the Protocol, Annex I countries are required to prepare policies and measures for the reduction of greenhouse gases in their respective countries. In addition, they are required to increase the absorption of these gases and to utilize all mechanisms available – joint implementation, the clean development mechanism and emissions trading – in order to be rewarded with credits that allow more greenhouse gas emissions at home.

3. Minimizing Impacts on Developing Countries, by establishing an adaptation fund for climate change.


4. Accounting, Reporting and Review, in order to ensure the integrity of the Protocol.


5. Compliance. Establishing a Compliance Committee to enforce compliance with the commitments under the Protocol.

Kyoto Mechanisms:

Under the Treaty, countries must meet their targets primarily through national measures. However, the Kyoto Protocol offers them an additional means of meeting their targets by way of three market-based mechanisms:
• Emissions trading – known as "the carbon market"
• The Clean Development Mechanism (CDM)
• Joint Implementation (JI)

An important element of the Kyoto Protocol is this set of flexibility mechanisms. They enable participating nations to achieve their emission targets by means other than simply reducing their own national emissions of greenhouse gases – hence the term "flexibility mechanisms."

Clean Development Mechanism: this mechanism allows developed (Annex I) nations to receive emission credits towards their own emission targets by participating in certain projects in developing (non-Annex I) countries. These projects must be approved by members of the Protocol and must contribute to sustainable development and greenhouse gas emission reductions in the host developing country. Between 2001, the first year CDM projects could be registered, and 2012, the end of the Kyoto commitment period, the CDM is expected to produce some 1.5 billion tonnes of carbon dioxide equivalent (CO2e) in emission reductions, most of them through renewable energy, energy efficiency, and fuel switching. By 2012, the largest potential for production of CERs (Certified Emission Reductions) is estimated in China (52% of total CERs) and India (16%). CERs produced in Latin America and the Caribbean make up 15% of the potential total, with Brazil the largest producer in the region (7%). Under the CDM, the Annex I nation receives emission credits for reducing greenhouse gas emissions in a developing nation; hence, while emissions in the Annex I nation have in actuality remained the same, overall global emissions have been reduced.

Joint Implementation: this mechanism allows Annex I nations to receive emission credits towards their own emission targets by participating in certain projects with other Annex I nations. These projects must be approved by all nations participating in the project, and must either reduce greenhouse gas emissions or contribute to enhanced greenhouse gas removal through emission sinks (e.g. reforestation). The formal crediting period for JI was aligned with the first commitment period of the Kyoto Protocol and did not start until January 2008 (Carbon Trust, 2009, p. 20). As of November 2008, only 22 JI projects had been officially approved and registered. The total projected emission savings from JI by 2012 are about one tenth those of the CDM. Russia accounts for about two-thirds of these savings, with the remainder divided roughly equally between Ukraine and the EU's new member states. Emission savings include cuts in methane, HFC, and N2O emissions.

Emissions Trading: this mechanism allows Annex I nations to purchase emission credits from other Annex I countries. Some countries will be below the emission targets assigned to them under the Protocol and, as such, will have spare emission credits. Under the emissions trading system, other nations may purchase these spare credits and use them towards their own emission targets.
International Emissions Trading: the most advanced emissions trading system (ETS) is the one developed by the EU; early assessments suggested that during its first two years in operation, the EU ETS turned an expected increase in emissions of 1–2 percent per year into a small absolute decline. The CDM and JI are called "project-based mechanisms," in that they generate emission reductions from projects. The difference between IET and the project-based mechanisms is that IET is based on setting a quantitative restriction on emissions, while the CDM and JI are based on the idea of "production" of emission reductions. The CDM is designed to encourage production of emission reductions in non-Annex I countries, while JI encourages production of emission reductions in Annex I countries.

2012 emission targets and “flexible mechanisms”

39 of the 40 Annex I countries have ratified the Protocol. Of these, 34 have committed themselves to reducing their greenhouse gas (GHG) emissions to targets set in relation to their 1990 emission levels. The targets apply to the four greenhouse gases carbon dioxide, methane, nitrous oxide and sulphur hexafluoride, and to two groups of gases, hydrofluorocarbons and perfluorocarbons. The six GHGs are translated into CO2 equivalents when determining reductions in emissions.

Carbon trading: carbon trading, or emissions trading, is an administrative approach used to control pollution by providing economic incentives for achieving reductions in the emissions of pollutants.

Procedure:
• A central authority sets a limit (cap) on the amount of pollutant that can be emitted.
• Companies/industries are issued emission permits and are required to hold an equivalent number of allowances (credits).
• The total amount of allowances and credits cannot exceed the cap.
• A transfer of allowances or credits is referred to as a trade. The buyer pays a charge for polluting, while the seller is rewarded for having reduced emissions.
• Those who can reduce emissions most cheaply will do so, achieving pollution reduction at the lowest possible cost.
This system is called cap and trade, or carbon trading (a toy sketch of the bookkeeping follows below).

Advantages:
• A better approach than direct regulation.
• Can be cheaper and politically preferable for existing industries, as initial allowances are allocated in proportion to historical emissions.
• Most of the money in the system is spent on environmental activities.

How CO2 is traded:
• A CER is sold at a price negotiated between buyer and seller.
• A CER is offered with a guarantee of delivery from the regulator.
• According to the World Bank, 374 MMT of CO2 were exchanged through such projects in 2005.

How does the Kyoto Protocol work?
1. The world is divided into two categories: Annex I countries (developed countries) and non-Annex I countries (developing countries). The US, EU countries, Japan, etc. are Annex I; China, India and others are non-Annex I.
2. Each Annex I country is assigned a target emissions reduction relative to its 1990 GHG emissions. The country must meet this target, calculated as an average over the five-year period 2008–2012. Non-Annex I countries have no emissions reduction targets but are encouraged to adopt environmentally friendly technologies to reduce GHG emissions.
3. Annex I countries can meet the target in one of three ways: (a) actual emissions reduction from sources within their borders; (b) the purchase of emission reduction credits on financial exchanges from other signatory countries; or (c) participation in Clean Development Mechanism (CDM) projects that generate emission reduction credits in developing countries. Note that the credits must be Certified Emission Reductions (CERs) approved by the CDM Executive Board.
4. Enforcement/penalty: failure to meet the targets results in a 30% penalty on excess emissions.
5. Most signatories that have ratified the Kyoto Protocol have established Designated National Authorities to develop, adopt and enforce the Kyoto process, including the CDM.
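The cap-and-trade procedure above can be illustrated with a toy registry. Everything here – the firm names, allocations and emission figures – is hypothetical, and a real Kyoto registry tracks far more (vintages, banking, CER issuance and so on); this is only a sketch of the core bookkeeping.

```python
# Toy cap-and-trade ledger: a cap, allowance allocations, trades, and a
# compliance check, mirroring the "Procedure" list above. All figures
# are hypothetical.
class Registry:
    def __init__(self, cap):
        self.cap = cap            # total allowances the authority may issue
        self.allowances = {}      # firm -> allowances held (1 = 1 t CO2-eq)
        self.emissions = {}       # firm -> tonnes actually emitted

    def allocate(self, firm, amount):
        issued = sum(self.allowances.values())
        assert issued + amount <= self.cap, "allocation would exceed the cap"
        self.allowances[firm] = self.allowances.get(firm, 0) + amount

    def trade(self, seller, buyer, amount):
        # Seller is rewarded for having reduced; buyer pays to emit more.
        assert self.allowances.get(seller, 0) >= amount, "seller lacks allowances"
        self.allowances[seller] -= amount
        self.allowances[buyer] = self.allowances.get(buyer, 0) + amount

    def compliant(self, firm):
        return self.emissions.get(firm, 0) <= self.allowances.get(firm, 0)

reg = Registry(cap=1_000)
reg.allocate("FirmA", 600)
reg.allocate("FirmB", 400)
reg.emissions = {"FirmA": 500, "FirmB": 450}   # B is 50 t over its allocation
reg.trade("FirmA", "FirmB", 50)                # A sells its surplus to B
print(reg.compliant("FirmA"), reg.compliant("FirmB"))  # True True
```

The point of the design is visible in the example: FirmB, whose own reductions would be expensive, buys FirmA's surplus instead, so the overall cap is met at lower total cost.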
Success of the Kyoto Protocol: the failure of the USA to ratify the Kyoto Protocol is a big problem, as the USA alone contributes roughly a quarter of the world's greenhouse gases. A number of countries have so far not met their Kyoto emission targets, and current projections call for much bigger cuts in emissions than the Kyoto Protocol requires; the United Nations now predicts a rise of 10% in greenhouse emissions since 1990. The Kyoto Protocol is a unique international initiative that recognises the dire environmental straits we are in. Its processes seem painfully slow and its results small against daily reports of serious global warming effects. However, its symbolic value may be its greatest asset. Any effort is better than none, and if governments are slow, people everywhere are doing what they can: recycling, green power, wearing a jumper rather than turning up the heater, and so on. Some local governments are not waiting for their national governments to come to the party, introducing their own carbon trading schemes or offering incentives for solar heating.

India and Kyoto Protocol:

India, whose economy has grown by 8–9 per cent a year in recent years, is one of the world's top polluters, contributing around 4–5% of global greenhouse gas emissions as its consumption of fossil fuels gathers pace. As a developing nation, India is not required to cut emissions – said to be rising by between 2 and 3 per cent a year – under the Kyoto Protocol, despite mounting pressure from environmental groups and industrialised nations. India made it clear on September 16, 2011 that it wanted an extension of the current Kyoto Protocol on emission cuts, but said it would not accept any further legally binding emission framework. As a developing country, India has already taken substantial and ambitious actions at great cost, and the issue of a legally binding agreement has acquired huge political sensitivities in India.

Although around 80 per cent of world growth in carbon emissions is coming from fast-growing economies like India and China, India has argued that even if its economy continues to grow at current levels for the next decade or two, its per capita emissions would still be below those of the developed countries. Without financial and technological assistance, states like India will not be willing to open their efforts at greenhouse emission reductions to international verification. Climate change talks not only involve competing economic interests but also raise matters of broad principle for the West's relationship with developing nations. India has committed itself to a mandatory fuel efficiency cap beginning in 2011 and a change in its energy matrix whereby renewable sources will account for 20 per cent of India's power usage by 2020, and has announced an ambitious solar energy plan.

Download your Seminar Reports for Kyoto Protocol

Report and Abstract for Fiber Distributed Data Interface

Download your Seminar Reports for Fiber Distributed Data Interface


Fiber Distributed Data Interface
FDDI (Fiber Distributed Data Interface) is a set of ANSI and ISO standards for data transmission in computer local area networks (LANs) via fiber-optic cable. Its architecture is based on the token ring and allows full-duplex communication. Since it can serve thousands of users, an FDDI LAN is often used as the backbone of a wide area network (WAN).


There is also an implementation of FDDI over copper cabling, known as CDDI. 100 Mbps Ethernet (100BASE-FX and 100BASE-TX) is based on FDDI technology.
OPERATION
An FDDI network uses two token rings, one serving as backup in case the primary fails. In each ring, data traffic flows in the opposite direction to the other. Using one of these rings, the speed is 100 Mbps with a range of 200 km; using both, the speed rises to 200 Mbps but the range drops to 100 km. The method of operation is very similar to Token Ring; however, the larger size of FDDI rings leads to higher ring latency, so more than one frame may be circulating on a ring at the same time.
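A back-of-envelope calculation shows why several frames can be in flight at once. The ~5 µs/km propagation delay is an assumed typical figure for fiber; the 200 km circumference and 100 Mbps rate come from the text above.

```python
# Back-of-envelope check (assumed figures) of why several frames can be
# in flight on a large FDDI ring at once.
ring_km = 200                 # maximum single-ring circumference from the text
prop_delay_us_per_km = 5      # ~5 microseconds/km in fiber (assumed typical)
bit_rate = 100e6              # 100 Mbps

ring_latency_s = ring_km * prop_delay_us_per_km * 1e-6   # 0.001 s = 1 ms
bits_in_flight = bit_rate * ring_latency_s               # 100,000 bits
max_frame_bits = 4500 * 8     # FDDI's maximum frame size is 4500 bytes

print(f"{bits_in_flight:.0f} bits in flight "
      f"(~{bits_in_flight / max_frame_bits:.1f} maximum-size frames)")
```

Roughly 100,000 bits are "stored" in the ring itself, so a station may begin transmitting before the previous frame has returned to its sender – something a small Token Ring LAN never has to deal with.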
FDDI was designed to achieve a real-time system with a high degree of reliability; virtually error-free transmission was a design goal. That is why, among other things, optical fiber was chosen as the medium for FDDI. It was also specified that the total error rate of a complete FDDI ring should not exceed one error per 1e9 bits (i.e. one error per gigabit), with a packet loss rate that likewise does not exceed 1 in 1e9. If a station fails or a cable breaks, the problem area is bypassed automatically, without user intervention, through what is known as a wrap (wrapback): when a fault is detected, the FDDI ring steers traffic onto the secondary ring so the network can reconfigure itself. All stations that are operating properly remain online and unchanged. As soon as the problem is corrected, service in that area is restored.
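The wrap behaviour can be modelled in a few lines of Python. This is a deliberately simplified, hypothetical model – real wrapback is negotiated by SMT at the two stations adjacent to the fault – but it shows the key property: after a single link failure, the two counter-rotating rings join into one ring that still reaches every healthy station.

```python
# Toy model of FDDI wrapback. Stations 0..n-1 sit on a dual ring; the
# primary ring runs station i -> i+1 (mod n), the secondary runs the
# opposite way. If the primary link failed -> (failed + 1) % n breaks,
# the two stations adjacent to the fault wrap primary onto secondary,
# producing a single logical ring.
def wrapped_loop(n, failed):
    """Return the ordered station path of the single wrapped ring."""
    start = (failed + 1) % n
    # Out along the primary ring from just past the fault up to the fault...
    primary_leg = [(start + i) % n for i in range(n)]
    # ...then back along the secondary ring to the starting wrap station.
    secondary_leg = primary_leg[-2:0:-1]
    return primary_leg + secondary_leg

loop = wrapped_loop(6, failed=2)      # the link from station 2 to 3 is cut
print(loop)                           # [3, 4, 5, 0, 1, 2, 1, 0, 5, 4]
assert set(loop) == set(range(6))     # every station is still reachable
```

The two wrap stations (2 and 3 here) appear once, at the turns, while interior stations are traversed in both directions – which is why a wrapped FDDI network keeps running rather than failing outright.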
There are various devices for the management and use of an FDDI network:
• Single Attachment Station (SAS): usually a server or workstation attached to only one ring, normally connected through a single transmission segment to a concentrator that implements an M-type MIC (Media Interface Connector). A SAS contains an SMT entity, a MAC sublayer entity, and one port with an S-type MIC.

• Dual Attachment Station (DAS): designed to connect, over separate full-duplex transmission segments, to both rings. A DAS has an SMT entity, one or more MAC sublayer entities, and exactly two ports, each with its own associated MIC. When each MIC is properly connected, the two logical and physical rings are formed.

• Single Attachment Concentrator (SAC): attaches through a single connection, so it is less fault-tolerant. It can be used to create a hierarchical tree structure.

• Dual Attachment Concentrator (DAC): a concentrator with ports beyond those needed for its own connection to the network. The additional ports can be used to connect other stations. Using a dual-attachment concentrator yields a station with three or more ports, each associated with its own MIC.

• Null Attachment Concentrator (NAC): it is also possible to form a network as a tree structure alone, without the dual ring. In such a configuration, the highest-level concentrator is a NAC. A NAC has no A- or B-type connectors for attaching to the dual ring, and no S-type connector for joining a higher-level concentrator; it has only M-type MICs, for connecting lower-level stations and concentrators.
FDDI specifies the physical layer and the data link layer of the OSI model, not as a single specification but as a set of four separate specifications, each with a specific function. Together, these specifications provide high-speed connectivity between upper-layer protocols such as TCP/IP and IPX and a medium such as fiber-optic cabling. The four FDDI specifications are:
• The MAC (Media Access Control) specification defines how the medium is accessed, including the frame format, token handling, addressing, the algorithm for computing the CRC (cyclic redundancy check) value, and error recovery mechanisms (a sketch of the CRC computation follows this list).
• The PHY (Physical Layer Protocol) specification defines the procedures for data encoding and decoding, timing (clocking) requirements, and framing, among other functions.
• The PMD (Physical Medium Dependent) specification defines the characteristics of the transmission medium, including fiber-optic links, power levels, bit error rates, optical components and connectors.
• The SMT (Station Management) specification defines FDDI station configuration, ring configuration and ring control features, including station insertion and removal, initialization, fault isolation, scheduling and statistics collection.
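As a concrete illustration of the MAC specification's CRC, here is a minimal bit-by-bit implementation of the 32-bit frame check sequence. FDDI uses the same 32-bit generator polynomial as the other IEEE 802 LANs; this sketch uses the reflected (least-significant-bit-first) form of that polynomial, and the frame contents are of course hypothetical.

```python
import binascii

# Bitwise CRC-32 (reflected polynomial 0xEDB88320, init and final XOR
# 0xFFFFFFFF) - the frame check sequence appended to each frame. Real
# adapters compute this in hardware as the bits are transmitted.
def fcs32(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

frame = b"hypothetical FDDI frame contents"
assert fcs32(frame) == binascii.crc32(frame)  # cross-check against the stdlib
print(hex(fcs32(frame)))
```

A receiver recomputes the CRC over the received frame and compares it with the FCS field; any mismatch marks the frame as errored, feeding the error-recovery mechanisms the MAC specification defines.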


History
FDDI development began in the ANSI X3T9.5 standards committee in 1983. Each of its specifications was developed in turn, culminating with SMT in 1994. The reason for its existence was to provide a LAN alternative to Ethernet and Token Ring that also offered increased reliability. Today, because of their superior speed, cost and ubiquity, Fast Ethernet and Gigabit Ethernet are preferred over FDDI.


Download your Seminar Reports for Fiber Distributed Data Interface
