Sunday, December 22, 2019

Outline Of A Social Awareness - 1691 Words

Social Awareness Essay
Sites that will help with grammar and any type of errors: https://www.paperrater.com/
Topic: Autism
Introduction
Attention getter: Autism affects 1 in every 68 children; it is one of the fastest growing developmental disorders in the U.S. ("Facts about Autism." Autism Speaks. N.p., n.d. Web. 11 Mar. 2016.)
Minor details about the issue: history ("Autism FAQ - History." N.p., n.d. Web. 11 Mar. 2016.)
Thesis:
Body x3
Topic sentence: The definition/characteristics ("Autism Characteristics." List of Autism Characteristics. N.p., n.d. Web. 11 Mar. 2016.; "Featured Conferences." Welcome to Temple Grandin's Official Autism Website. N.p., n.d. Web. 11 Mar. 2016.) The spectrum.
Each autistic child was somewhere on the autistic spectrum; each child with autism has different characteristics, but extremely similar symptoms; and each child, if diagnosed early, has a chance at a semi-normal life if they go through the proper treatments at a young age. This essay will be exposing the characteristics, the spectrum and the treatments of autism. Autism is known as a mental disability due to difficulty in communicating and forming relationships with other people; this is one of the major characteristics of autism. Autism's original meaning was 'to escape from reality' (Autism FAQ). Many people believed that autistic children were, in fact, attempting to escape reality. Autism was first officially described by Leo Kanner, who published his first paper identifying children with autism in 1943 (Autism FAQ). Autistic children at that time were believed to be emotionally disturbed or 'retarded.' What they did not understand, however, is that there was a wide range of different types of autism and different characteristics. Here is an example of when Amy and Dan McGillivray were asked about their child's characteristics and how it affected them: "Our daughter did not like to play with other children, she preferred to be in her room. She was unable to speak until she was the age of 4, and she did not play with toys as a normal child would. Instead she would line them up by their

Friday, December 13, 2019

Role of Youth in Eradicating Corruption

Laser and its Medical Applications. Presented by S. Vignesh and J. Sabastian.

The Advent of the "Laser Scalpel"

Early experimenters with medical lasers pointed out that there are surgical operations that are difficult to perform with the conventional scalpel and that a laser beam might be used instead. Initial trials showed that a finely focused beam from a carbon dioxide gas laser could cut through human tissue easily and neatly. The surgeon could direct the beam from any angle by using a mirror mounted on a movable metal arm. Several advantages of laser surgery quickly became apparent. First, the light beam is consistent, which means that it gives off the same amount of energy from one second to the next. (Photo caption: In this photo taken during open-heart surgery, a doctor uses a laser probe to punch small holes in the patient's heart muscle to increase the organ's blood flow.) So as long as the beam is moving along, the cut it makes (the incision) does not vary in depth, whereas when using a scalpel a doctor can accidentally make part of the incision too deep. A second advantage of the surgical laser is that the hot beam cauterizes, or seals off, the open blood vessels as it moves along. (This works well mainly for small vessels, such as those in the skin. The doctor still has to seal off the larger blood vessels using conventional methods.) Still another advantage is that the cells in human tissue do not conduct heat very well, so the skin or any other tissue near the laser incision does not get very hot and is not affected by the beam. This advantage of laser surgery is very helpful when a doctor must operate on a tiny area that is surrounded by healthy tissue or organs. It should be pointed out that the "laser scalpel" is not necessarily the best tool to use in every operation. Some doctors feel that while the laser is useful in some situations, it will never totally replace the scalpel. Others are more optimistic and see a day when more advanced lasers will make the scalpel a thing of the past. The second of these views may prove to be the most accurate, for surgical use of lasers is rapidly advancing. At first, lasers were considered most effective in operating on areas that are easy to reach: areas on the body's exterior, including the skin, mouth, nose, ears, and eyes. But in recent years doctors have demonstrated remarkable progress in developing laser techniques for use in internal exploration and surgery. Of course, in order to be able to direct the laser beam the doctor must be able to see inside the body. In some cases this is a simple matter of making an incision and opening up the area to be operated on. But there are situations in which this step can be avoided.

Cleaning Arteries with Light

For instance, lasers are increasingly used to clean plaque from people's arteries. Plaque is a tough fatty substance that can build up on the inside walls of the arteries. Eventually the vessels can get so clogged that blood does not flow normally, and the result can be a heart attack or stroke, both of which are serious and sometimes fatal. The traditional method for removing the plaque involves opening the chest and making several incisions, a long and sometimes risky operation. It is also expensive and requires weeks for recovery.
An effective alternative is to use a laser beam to burn away the plaque. The key to making this work is the doctor's ability to see inside the artery and direct the beam, another area in which fiber optics and lasers are combined into a modern wonder tool. An optic fiber that has been connected to a tiny television camera can be inserted into an artery. These elements now become a miniature sensor that allows the doctor and nurses to see inside the artery while a second fiber is inserted to carry the bursts of light that will burn away the plaque. The technique works in the following way. The fiber-optic array is inserted into a blood vessel in an arm or leg and moved slowly into the area of the heart and blocked arteries. When the array is in place the laser is fired and the plaque destroyed, and then the exhaust vapors are sucked back through a tiny hollow tube that is inserted along with the optical fibers. When the artery has been cleaned out the doctor removes the fibers and tube, and the operation is finished. This medical process is known as laser angioplasty. It has several obvious advantages. First, no incision is needed (except for the small one in the vessel to insert the fibers). There is also little or no bleeding, and the patient can enjoy total recovery in a day or two. Laser angioplasty does have some potential risks that must be considered. First, when the laser beam fires at the plaque it must be aimed very carefully because a slight miss could cut through the wall of the artery and cause serious bleeding. The patient's chest would then have to be opened up after all. Another problem involves small pieces of burnt debris from the (photo caption: surgeons use a tiny laser to cut away tissue in a gallbladder operation; the laser and a tiny camera are inserted into the navel, so no abdominal incision is necessary).

Lasers Heal and Reshape the Eyes

Some of the most remarkable breakthroughs for medical lasers have been in the area of ophthalmology, the study of the structure and diseases of the eye. One reason that laser beams are so useful in treating the eye is that the cornea, the coating that covers the eyeball and admits light into the interior of the eye, is transparent. Since it is designed to admit ordinary light, the cornea lets in laser light just as well and remains unaffected by the beam. First, the laser is very useful in removing extraneous blood vessels that can form on the retina, the thin, light-sensitive membrane at the back of the eyeball. It is on the retina that the images of the things the eye sees are formed. Damage to the retina can sometimes cause blindness. The laser most often used in the treatment of this condition is powered by a medium of argon gas. The doctor aims the beam through the cornea and burns away the tangle of blood vessels covering the retina. The procedure takes only a few minutes and can be done in the doctor's office. The laser can also repair a detached retina, one that has broken loose from the rear part of the eyeball. Before the advent of lasers detached retinas had to be repaired by hand, and because the retina is so delicate this was a very difficult operation to perform. Using the argon laser, the doctor can actually "weld" the torn retina back in place. It is perhaps a strange coincidence that Gordon Gould, one of the original inventors of the laser, later had one of his own retinas repaired this way.
Another condition that affects the eye is glaucoma, which is characterized by the buildup of fluid in the eye. Normally the eye's natural fluids drain away a little at a time, and the eye stays healthy. In eyes impaired with glaucoma the fluid does not drain properly, and the buildup affects vision; blindness can sometimes result. In some cases drugs can be used to treat glaucoma. If the drugs fail, however, many doctors now turn to the laser to avoid conventional surgery. The laser punches a hole in a preplanned spot and the fluid drains out through the hole. Again, the treatment can be performed in a doctor's office instead of a hospital.

Using Lasers for Eye Surgery

The laser works like a sewing machine to repair a detached retina, the membrane that lines the interior of the eye. The laser beam is adjusted so that it can pass harmlessly through the lens and focus on tiny spots around the damaged area of the retina. When it is focused, the beam has the intensity to "weld" or seal the detached area of the retina back against the wall of the eyeball. The patient's eyeglass prescription is literally carved inside the cornea with the beam of an excimer laser [a laser device that produces pulses of ultraviolet, or UV, light]. A small flap of the cornea is first removed with a precision knife . . . and an inner portion of the cornea is exposed to the excimer laser. (Photo caption: A patient undergoes eye surgery performed by a laser beam. In addition to treating detached retinas, lasers can remove cataracts.) After the prescription is carved, the corneal flap that was opened is then put back into place over the ablated [surgically altered] cornea. LASIK does not come without risks. The changes it makes in the cornea are permanent, and the danger of unexpected damage is ever present. However, the procedure has become increasingly popular each year; about a million Americans had it done in the year 2000, and about four thousand surgeons in the United States were trained to perform it.

Some Cosmetic Uses of Lasers

Medical lasers are also widely used for various types of cosmetic surgery, including the removal of certain kinds of birthmarks. Port-wine stains, reddish purple skin blotches that appear on about three out of every one thousand children, are an example. Such stains can mark any part of the body but are most commonly found on the face and neck. The medical laser is able to remove a port-wine stain for the same reason that a military laser is able to flash a message to a submerged submarine. Both lasers take advantage of the monochromatic quality of laser light, that is, its ability to shine in one specific color. The stain is made up of thousands of tiny malformed blood vessels that have a definite reddish purple color. This color very strongly absorbs a certain shade of green light. In fact, that is why the stain looks red. It absorbs the green and other colors in white light but reflects the red back to people's eyes. To treat the stain, the doctor runs a wide low-power beam of green light across the discolored area. The mass of blood vessels in the stain absorbs the energetic laser light and becomes so hot that it is actually burned away. The surrounding skin is a different color than the stain, so that skin absorbs only small amounts of the beam and remains unburned. (Photo caption: A doctor uses an argon laser to remove a port-wine stain, a kind of birthmark. Unwanted tissue is burned away while normal skin remains undamaged.) (Of course, the burned
areas must heal, and during this process some minor scarring sometimes occurs.)

Laser-Assisted Dentistry

Dentistry is another branch of medicine that has benefited tremendously from laser technology. Indeed, lasers have made some people stop dreading a visit to the dentist. No one enjoys having a cavity drilled, of course. It usually requires an anesthetic (a painkiller like novocaine) that causes uncomfortable numbness in the mouth; also, the sound of the drill can be irritating or even sickening to some people. Many dentists now employ an Nd-YAG laser (which uses a crystal for its lasing medium) instead of a drill for most cavities. The laser treatment takes advantage of the simple fact that the material that forms in a cavity is much softer than the enamel (the hard part of a tooth). The laser is set at a power that is just strong enough to eliminate the decayed tissue but not strong enough to harm the enamel. When treating a very deep cavity bleeding sometimes occurs, and the laser beam often seals off blood vessels and stops the bleeding. The most often asked question about treating cavities with lasers is: Does it hurt? The answer is no. Each burst of laser light from a dental laser lasts only thirty-trillionths of a second, much faster than the amount of time a nerve takes to trigger pain. In other words, the beam would have to last 100 million times longer in order to cause any discomfort. So this sort of treatment requires no anesthetic.

Advantages of Lasers for Dental Surgery

In this excerpt from an article in The Dental Clinics of North America, Robert A. Strauss of the Medical College of Virginia mentions some of the advantages of using lasers for oral surgery. Decreased post-operative swelling is characteristic of laser use [for oral surgery]. Decreased swelling allows for increased safety when performing surgery within the airway [the mouth] . . . and increases the range of surgery that oral surgeons can perform safely without fear of airway compromise. This effect allows the surgeon to perform many procedures in an office or outpatient facility that previously would have required hospitalization. . . . Tissue healing and scarring are also improved with the use of the laser. . . . Laser wounds generally heal with minimal scar formation and . . . often can be left unsutured [without stitches], another distinct advantage. Thus the role of the laser in the medical field is most predominant.

Thursday, December 5, 2019

Importance of Technology

• Transcript 1 = luv u 4 ever 🙂
• Transcript 2 = u r 2 sweet 2 b 4got10 can u cum c me face2face
• Transcript 3 = I h8 u!!!
• Transcript 4 = Jake ur bag is pukka
• Transcript 5 = iv been chatin with my penpal all day
• Transcript 6 = how ya doin!
• Transcript 7 = Jake: "r ur headphones good" Demal: "yh their awesome FYI they where only $5.99"
Ali Nasir, 10B, Mr. Wotson

Introduction
• One of the forms of multimodal talk is texting.
• Texting has captivated a whole generation of young people.
• Texting has become universal; it is practiced all over the world.
• Texting is done from mobile to mobile, by sending the text to the mobile number. It can also be sent to many people at the same time.
• Texting is thought to be mostly used by young people and teenagers.
• The older generations feel that texting has taken the ability of writing and correct spelling to zero; they deplore what texting has done to the English language.
• Texting is also done as a means of advertising.
• Large companies text to anyone they can.
• Doctor surgeries and even schools like the one I study at also use texting to inform patients and parents of relevant information.
• The language of texting has advanced so much since it started.
• There are so many ways to text each other.

Paragraph 1
• People have found innovative ways of texting by using rebus abbreviation; this is where a name or a word is represented by a picture or pictures suggesting its syllables.
• They are like puzzles.
• Punctuation marks and brackets are used to show emotions. We can see this in Transcript 1. 🙂 Seen upright they mean nothing, but look at them sideways and it is a smiling face.
• Other symbols and numbers are used, like @, 4, 8.

Paragraph 2
• The use of abbreviated and shortened forms of words saves time in texting and also shortens texts to keep mobile bills at a minimum, as seen in Transcript 2.
• People are so fast at texting in abbreviations that the mind boggles at the speed of their thumbs.
• There is efficiency in the way letters are used, and texters shorten words to a minimum.

Paragraph 3
• Intonation is defined as the tone or pitch of the voice in speaking, or the way a person is speaking, like conveying anger, liveliness or shyness.
• Intonation cannot really be well conveyed in texting.
• The messages lose the true meaning of the sender, as shown in Transcript 3.
• Sometimes miscommunication happens and feelings are hurt.
• It is hard to explain what you are really feeling through texting. When we communicate, body language and tone of voice play an important role.
• When we try intonating in a text, the other person may read something different in the exclamation marks you have sent.
• Sometimes intonation in a text can carry a double meaning.

Paragraph 4
• The use of non-standard words in the texting world has become so popular that almost everyone who texts knows the meanings of the abbreviated words.
• Also, the dialect of different places is making its way into texting.
• We see words such as lush or mint being used in other regions of England. For example, in Manchester "mint" means really good and "mardy" means moody, in Wales "lush" means very nice, and in London "pukka" means very good.
• We see their uses in Transcript 4.
• The use of slang has also found a place in texting.
• To a person who does not text, the language used must seem foreign.

Paragraph 5
• In non-standard words a new form of communication has emerged.
• For example, a text pal is a person that you never talk to or see, but you just text to, like a pen pal. Another example is saying things like "text of the devil", a version of "speak of the devil". The way we would use these new words can be seen in Transcript 5.
• A whole new dictionary would have to be written just to accommodate all the new words that have sprung up in this new texting age.

Paragraph 6
• The use of incomplete sentences, or the use of phrases instead of proper grammatical sentences, is common in texting.
• When you use incomplete sentences you fail to express the total meaning.
• Shortening of sentences or use of phrases is fine while texting, maybe because of the lack of time or space when texting, like in Transcript 6.
• Phrases also are a part of this new language and most teenagers will know hundreds of them. It is like a second language for them.
• These are a few popular phrases in texting: BFF (best friends forever), FYI (for your information), IDC (I don't care), JC (just chilling), GAL (get a life).
• We can see how to use a phrase in Transcript 7.

Paragraph 7
• I think one of the similarities between texting and speech is that we text the way we speak.
• We text the words the way they sound, not the way they are spelt in the dictionary.
• We text phonetically.
• Spelling goes out of the window.
• The content of the text that is written is a different matter. We see that many words are taken out of the sentences to make them shorter, and if we read these shortened text messages out loud we would sound like cavemen in cartoons, or like Tarzan saying, "me Tarzan, you Jane."
• Even I say to my mum SOZ (sorry), LOL (laugh out loud) and CBA (can't be asked).
• My mum is always shouting at me to speak proper English.

Paragraph 8
• When it comes to actual writing, students are using text language instead of proper English in their studies.
• This is creating problems in our schools, colleges and the workplace.
• Texting is a distraction and stops you from paying attention to what is happening around you. Students are found texting in classes instead of paying attention to what is happening in the classroom.
• Texting has its negatives but it also has its positives.
• It keeps people connected to each other.
• Texters are always updating each other on what is going on there and then.
• Sometimes it is better to quietly text than to talk in a public place and disturb others.
• For every new technology there will always be positives and negatives.
• I think there is always a middle way in which we can use texting and not go to the extreme.

Thursday, November 28, 2019

Black Panther

The Black Panther

The black panther is a type of leopard. It belongs to the family Felidae, and is classified as Panthera pardus. Black panthers are found in Africa, Asia Minor, the Middle East, India, Pakistan, China, Siberia, and Southeast Asia. The male is called a panther, a female is called a panthress, and an immature panther is called a cub. The physical characteristics of the black panther vary. They are covered with black fur, with some darker areas that you can only see in certain lighting. The color of the panther depends on its location. The black panther has a long dark tail to go with its dark body. It has compact muscles and walks with a flowing movement in its limbs. The height of a male black panther is about nine feet, and a female is seven to eight feet tall. The male can weigh as little as 85 pounds and up to 198 pounds, but a female can weigh from 75 to 130 pounds. The life span of a black panther is about twenty-one years. It is considered a small cub for six months, and a large cub from six months to one and a half years. A cub remains with its mother until it is 18 to 24 months old and then it leaves to establish a territory of its own. A panther is considered a sub-adult from 1.5 to 3.5 years and a prime adult from 3.5 years to nine years. An old adult ranges in age from nine years up till around age twenty-one. The black panther's habitat is usually the jungle, where its black coat provides a great camouflage. The panther usually lives in or near a tree, and is a great tree climber. After the panther kills his prey, he takes it to the top of the tree to keep it away from other predators. The black panther has a few enemies. The lions are not fond of them, and will kill them when they get the chance. Baboons also can attack and drive the panther away and kill it, and the hyenas also drive the panthers away and kill them so they can have the dead prey for themselves. But the panther is a very good fighter. During the day, the panther will rest, feed, hunt, walk, and court. During the night he will feed, hunt, court, and travel. The panthers are near the top of the food chain. Even though they lack the cheetah's speed, they have their own gifts. The panther is very graceful, powerful and very aggressive. The panther has an incredible jumping ability. It can jump up to twenty feet and ten feet high without much difficulty. The panther stalks his prey and waits until it is near, and then attacks. Then he grabs his prey by the neck and suffocates it until it dies. The black panther is a carnivore and his diet consists of animals as small as mice, up to predators twice his size. During the day the female that is raising the young will sometimes hunt. Panthers don't have a mating season but usually mate during January and February. The panther usually abandons his usual habitat to mate. During mating, the male grasps the female's skin and holds it until the mating process is complete. Then the female turns and swats the male away with her forepaw. After mating, the pair splits up and the female cares for the resulting cubs. One source says, "the average rate of producing cubs is 15%," so a lot of females are unsuccessful in having cubs. The panther usually lives a solitary life, except for his cubs and mate. The panther is very defensive of his territory, but respects other panther territories. There is no hierarchy, and no king. Is the black panther in danger of becoming extinct? The black panther is nowhere near being extinct.
The Convention in the Trade of Endangered Species says "the African Leopard, by no means, is to be considered an endangered species, with a population of about 700,000 in Africa." Also, a research team of scientists stated that "322 leopards could be hunted annually as trophy animals in Namibia without decimating the

Monday, November 25, 2019

Manufacturing and Process Essay Example

Manufacturing and Process: Multiple Choice Questions

1. An organization's process strategy: a. will have long-run impact on efficiency and production  b. is the same as its transformation strategy  c. must meet various constraints, including cost  d. is concerned with how resources are transformed into goods and services  e. all of the above are true
2. A job shop is an example of a(n): a. repetitive process  b. continuous process  c. line process  d. intermittent process  e. specialized process
3. Three types of process strategies are: a. goods, services, and hybrids  b. manual, automated, and service  c. process focus, repetitive focus, and product focus  d. modular, continuous, and technological
4. Which of the following industries is likely to have low equipment utilization? a. auto manufacturing  b. beer making  c. television manufacturing  d. hospitals
5. A product focused process is commonly used to produce: a. high-volume, high-variety products  b. low-volume, high-variety products  c. high-volume, low-variety products  d. low-variety products at either high or low volume
6. Which one of the following products is most likely made in a job shop environment? a. custom furniture  b. graphite pencils  c. television sets  d. cigarettes
7. Which of the following products is likely to be assembled on a repetitive process line? a. automobiles  b. personal computers  c. dishwashers  d. television sets  e. all of the above
8. An assembly line is an example of a: a. product focused process  b. customized process  c. repetitive process  d. specialized process
9. Which of the following transformations generally has the highest equipment utilization? a. process focused process  b. repetitive process  c. product focused process  d. specialized process
10. Which of the following is false regarding repetitive processes? a. They use modules.  b. They allow easy switching from one product to the other.  c. They are the classic assembly lines.  d. They have more structure and less flexibility than a job shop layout.  e. They include the assembly of basically all automobiles.
11. Mass customization, when done correctly: a. increases pressure on supply-chain performance  b. helps eliminate the guesswork that comes with sales forecasting  c. drives down inventories  d. increases pressure on scheduling  e. all of the above
12. Which of the following characteristics best describes process focus? a. low volume, high variety  b. finished goods are usually made to order  c. processes are designed to perform a wide variety of activities  d. all of the above are true
13. Service blueprinting: a. provides the basis to negotiate prices with suppliers  b. mimics the way people communicate  c. determines the best time for each step in the process  d. focuses on the provider's interaction with the customer
14. Which of the following characteristics best describes repetitive focus? a. uses modules  b. falls between product and process focus  c. widely used for the assembly of automobiles  d. all of the above
15. A drawing of the movement of material, or people, is a: a. flow diagram  b. process chart  c. service blueprint  d. process map
16. Strategies for improving productivity in services are: a. separation, self-service, automation, and scheduling  b. lean production, strategy-driven investments, automation, and process focus  c. reduce inventory, reduce waste, reduce inspection, and reduce rework  d. high interaction, mass customization, service factory, and just-in-time
17. In the mass service and service factory quadrants of the service process matrix, the operations manager could focus on all of the following except: a. automation  b. standardization  c. tight quality control  d. customization
18. Which of the following is true regarding opportunities to improve service processes? a. Automation can do little to improve service processes, because services are so personal.  b. Layout is of little consequence, since services seldom use an assembly line.  c. If a work force is strongly committed, it need not be cross-trained and flexible.  d. All of the above are true.  e. None of the above are true.
19. Which of the following is true regarding vision systems? a. They are consistently accurate.  b. They are modest in cost.  c. They do not become bored.  d. All of the above are true.
20. Which of the following are typical of process control systems? a. They have sensors.  b. The digitized data are analyzed by computer, which generates feedback.  c. Their sensors take measurements on a periodic basis.  d. all of the above
21. The use of information technology to monitor and control a physical process is known as: a. process control  b. computer-aided design  c. information numeric control  d. numeric control
22. Which of the following statements regarding automated guided vehicles is false? a. They are used to move workers from one side of the plant to the other.  b. They are used to deliver meals in hospitals and jails.  c. They are an alternative to monorails, conveyors, and robots in automated material handling.  d. They are electronically guided and controlled carts used to move parts and equipment.
23. Automatic placement and withdrawal of parts and products into and from designated places in a warehouse describes: a. AGV  b. CAD/CAM  c. CIM  d. ASRS
24. Computer-integrated manufacturing (CIM) includes manufacturing systems that have: a. computer-aided design, a flexible manufacturing system, inventory control, warehousing and shipping integrated  b. transaction processing, management information systems, and decision support systems integrated  c. automated guided vehicles, robots, and process control  d. robots, automated guided vehicles, and transfer equipment
25. Which one of the following technologies is used only for material handling, not actual production or assembly? a. robots  b. CNC  c. CAD  d. AGVs
26. A system using an automated work cell controlled by electronic signals from a common centralized computer facility is called a(n): a. adaptive control system  b. robotics  c. flexible manufacturing system (FMS)  d. automatic guided vehicle (AGV) system
27. Operators simply load new programs, as necessary, to produce different products. This describes: a. automated guided vehicles  b. flexible manufacturing systems (FMS)  c. vision systems  d. process control
28. Examples of the impact of technology on services include: a. debit cards  b. supermarket scanners  c. electronic hotel key/lock systems  d. all of the above
29. Process reengineering: a. is the fundamental rethinking and radical redesign of business processes  b. tries to bring about dramatic improvements in performance  c. focuses on activities that cross functional lines  d. all of the above
30. Making environmentally sound products through efficient processes: a. is unprofitable, as long as recyclable materials prices are soft  b. is known as lean manufacturing  c. can still be profitable  d. is easier for repetitive processes than for product-focused processes
31. Which of the following is true regarding the concept of flexibility? a. It is the ability to change production rates with little penalty in time, cost, or customer value.  b. It can be accomplished with sophisticated electronic equipment.  c. It may involve modular, movable, even cheap equipment.  d. All of the above are true.
32. Flexibility can be achieved with: a. moveable equipment  b. inexpensive equipment  c. sophisticated electronic equipment  d. modular equipment  e. all of the above

Chapter 7: Multiple Choice Answers
1. e  2. d  3. c  4. d  5. c  6. a  7. e  8. c  9. c  10. b  11. e  12. d  13. d  14. d  15. a  16. a
17. d  18. e  19. d  20. d  21. a  22. a  23. d  24. a  25. d  26. c  27. b  28. d  29. d  30. c  31. d  32. e

Thursday, November 21, 2019

Racism in Disney Movies

Racism in Disney Movies - Research Paper Example ed lyrics are vastly clear as they depict the oriental society as crude and completely barbaric by citing erroneous and negative accounts of their culture. Although, the song has now been revised for its offensive lyrics, but the fact that it was there to begin with clearly shows the bad stereotypes and how it was propagated in earlier versions of the film. (Bernstein & Studlar, 1997, p. 17) This deconstruction of oriental culture within western cinema was first brought to notice by Edward Said’s (1978) theory of ‘Orientalism’, according to which Orientals were portrayed as crude and savages in the media to promote the superiority of western ideology. The racist portrayal was not only restricted to the Middle East, but the general depiction of the minorities was indeed quite warped. For instance, in Pocahontas, the story has been turned into a heartwarming romance between an Indian girl and an Englishman, with the story ending on a heartwarmingly happier note, wit h both ethnicities finally living in harmony. On the other hand, the real events were not as delightful as the one shown in the movie, which clearly represents how misguiding these movies are for the children. In the end, the Indian servant and Pocahontas’ union with her white lover clearly shows the west’s perceived superiority, which is blatantly and colorfully projected on to the big screens for the young children. In reality, the Native Indians were systematically wiped out to help the white faction gain the upper hand over the region; a painful historical fact that was skillfully masked in the film. Furthermore, in several instances the Native Indians are openly referred to as savages, which is further detrimental in establishing an image for the child. Even in Peter Pan, the Native Indians are... As a matter of fact, the obsession with the notion of ‘happily ever after’ is not only contained within the west, but all over the world young girls and boys are ardent fans of Disney motion pictures. Although, parents may not find anything bothersome regarding the content that they are feeding their children through the movies, but analysts have discovered a number of nefarious undertones within the innocent storylines of their movies. Although, Disney has tried their best to remove all the aforementioned racist lyrics but the fact that they are still present on previous copies of the movies continues to make it alarming for parents. Even though, some parents have deemed the presence of these racist elements as completely unimportant, but it is an undeniable fact that children are highly impressionable and thus are greatly influenced by the things they see on television. Disney movies may cause them to establish erroneous pre-conceived notions about people who seem to possess the traits that are present in all the unfavorable cartoon characters. They may hear a black man speaking and immediately make the association that they are lazy and idle like the crows in Dumbo and even believe that all Native Indians are crude and have bad grammar. These pre-conceived notions will only make it difficult for them to associate or identify themselves with other races on a human level. Therefore, it is not healthy for parents to continually expose their child to such content as it narrows down the child’s horizon and he continues to view foreignness in a negative light and children belonging to minority groups may grow up to view western values and cultures as completely superior to their own.

Wednesday, November 20, 2019

Physical Structure of a Neuron (Neuron-to-Neuron Communication) Essay

Physical Structure of a Neuron (Neuron-to-Neuron Communication) - Essay Example Examples of the ten different body systems are the circulatory system, the digestive system, the integumentary (skin), the skeletal, the muscular, the digestive, respiratory, the reproductive, the endocrine and the nervous system. All these systems work together in harmony that makes the human body a complex whole. Vitamins, nutrients and minerals are necessary for all the systems to work perfectly to attain and maintain good health. Regular exercise, proper diet and a healthy lifestyle all contribute and work together to make these systems function properly. Perhaps one of the most complex body system is the nervous system. This is the set of body system that is composed of very highly-specialized cells involved with the receiving and its transmission of information from both internal and external stimuli. In other words, this system is responsible for the receipt and relay of various communications within the human body. It is a complex system composed of the brain, the spinal colu mn and the nerves. The nervous system also includes the special sense organs of the eyes, the ears, nose, taste buds and skin. In effect, the nervous system is the control system of the human body. It is a sort of a command center made up of the central nervous system (brain and spinal column) and the peripheral nervous system (the spinal nerves and the twelve cranial nerves). ... It is also known by other terms like biopsychology, behavioural neuroscience and physiological psychology but the ultimate aim is the same, which is to better understand how a human nervous system and its components explain our behaviours as well as various ailments. In this regard it is very important to study and understand how biological processes affect not only behaviours and emotions but the entire cognitive process as well. For this purpose, it is crucial to explore how a person's actions are greatly influenced by the brain, neurotransmitters and the nervous system. It is therefore necessary to study the complex inter-relationship between anatomy and physiology in the complex human process of biological growth and development. This very fascinating field of study has already yielded some useful insights into how the nervous system affects and influences the entire body and indeed the whole person through a discovery of the role of chemical transmission by neurotransmitters wit hin this system in relay of information from various stimuli (Wickens, 2005, p. 11). It is in this connection that this paper is discussing the basic unit of the nervous system which is the neuron and how the neurons in turn transmit crucial information between them. An understanding of a neuron's physical structure is a necessary adjunct to the process of understanding in the entire chemical transmission process. An example of a neurotransmitter is dopamine; too little of it causes Parkinson's and Alzheimer's but too much of it is associated with psychological disorders like dyslexia and schizophrenia. Physical Structure of a Neuron – the neuron is a single cell which is the basic unit or building block of the entire nervous system.

Monday, November 18, 2019

Supermarket Company Strategic Analysis Assignment

The report below provides an insight into the supermarket company Tesco, with emphasis on its external environment analysis and the company's analysis of resources, competence and culture. Two future strategic options are suggested in regard to the resource-based strategies. Tesco is one of the largest food retailers in the world, operating around 2,318 stores and employing over 326,000 people. It provides online services through its subsidiary, Tesco.com. The UK is the company's largest market, where it operates under the four banners of Extra, Superstore, Metro and Express. The company sells almost 40,000 food products, as well as clothing and other non-food lines. The company's own-label products (50 percent of sales) are at three levels: value, normal and finest. As well as convenience produce, many stores have gas stations, making the company one of Britain's largest independent petrol retailers. Other retailing services offered include Tesco Personal Finance.

Operating in a globalised environment with stores around the globe, Tesco's performance is highly influenced by the political and legislative conditions of these countries, including the European Union (EU). For employment legislation, the government encourages retailers to provide a mix of job opportunities, from flexible, lower-paid and locally-based jobs to highly-skilled, higher-paid and centrally-located jobs (Finch, 2005), and also to meet the demand from population segments such as students, working parents and senior citizens. Tesco understands that retailing has a great impact on jobs and people factors (new store developments are often seen as destroying other jobs in the retail sector as traditional stores go out of business or are forced to cut costs to compete), being an essentially local and labour-intensive sector. Tesco employs large numbers of student, disabled and elderly workers, often paying them lower rates. In an industry with a typically high staff turnover, these workers offer a higher level of loyalty and therefore represent desirable candidates.

Economic Factors

Economic factors are of concern to Tesco, because they are likely to influence demand, costs, prices and profits. One of the most influential factors on the economy is high unemployment, which decreases the effective demand for many goods and adversely affects the demand required to produce such goods. These economic factors are largely outside the control of the company, but their effects on performance and the marketing mix can be profound. Although international business is still growing (Appendix A), and is expected to contribute greater amounts to Tesco's profits over the next few years, the company is still highly dependent on the UK market. Hence, Tesco would be badly affected by any setback in the UK food market and is exposed to market concentration risks.

Social/Cultural Factors

Current trends indicate that British customers have moved towards 'one-stop' and 'bulk' shopping, which is due to a variety of changes in social trends. Tesco have, therefore, increased the amount of non-food items available for sale. Demographic changes such as the aging population, an increase in female workers and a decline in home meal preparation mean that UK retailers are also focusing on

Friday, November 15, 2019

H.264 Video Streaming System on Embedded Platform

H.264 Video Streaming System on Embedded Platform

ABSTRACT

The adoption of technological products like digital television and video conferencing has made video streaming an active research area. This report presents the integration of a video streamer module into a baseline H.264/AVC encoder running on a DM6446 EVM embedded platform. The main objective of this project is to achieve real-time streaming of baseline H.264/AVC video over a local area network (LAN) as part of a surveillance video system. The encoding of baseline H.264/AVC and the hardware components of the platform are first discussed. Various streaming protocols are studied in order to implement the video streamer on the DM6446 board. The multi-threaded encoder application is used to encode raw video frames into H.264/AVC format and write them to a file. For the video streaming, the open source Live555 MediaServer was used to stream video data to a remote VLC client over the LAN. Initially, file streaming was implemented from PC to PC. Upon successful implementation on the PC, the video streamer was ported to the board. The steps involved in porting the Live555 application are also described in the report. Both unicast and multicast file streaming were implemented in the video streamer. Due to the problems of file streaming, the live streaming approach was adopted. Several methodologies are discussed for integrating the video streamer and the encoder program. Modifications were made to both the encoder program and the Live555 application to achieve live streaming of H.264/AVC video. Results of both file and live streaming are shown in this report. The implemented video streamer module will be used as a base module of the video surveillance system.

Chapter 1: Introduction

1.1. Background

Significant breakthroughs have been made over the last few years in the area of digital video compression technologies. As such, applications making use of these technologies have also become prevalent and continue to be active research topics today. For example, digital television and video conferencing are applications that are now commonly encountered in our daily lives. One application of interest here is to make use of these technologies to implement a video camera surveillance system which can enhance the security of consumers' business and home environments. In typical surveillance systems, the captured video is sent over a cable network to be monitored and stored at remote stations. As the captured raw video contains a large amount of data, it is advantageous to first compress the data using a compression technique before it is transferred over the network. One such compression technique that is suitable for this type of application is the H.264 coding standard. H.264 coding is better suited than other coding techniques for video streaming, as it is more robust to data losses and has higher coding efficiency, which are important factors when streaming is performed over a shared Local Area Network. With the increasing acceptance of H.264 coding and the availability of high-computing-power embedded systems, a digital video surveillance system based on H.264 on an embedded platform is hence feasible and potentially more cost-effective. Implementing an H.264 video streaming system on an embedded platform is a logical extension of video surveillance systems, which are still typically implemented using high-computing-power stations (e.g. PCs).
In an embedded version, a Digital Signal Processor (DSP) forms the core of the embedded system and executes the intensive signal processing algorithms. Current embedded systems typically also include network features which enable the implementation of data streaming applications. To facilitate data streaming, a number of network protocol standards have also been defined and are currently used for digital video applications.

1.2. Objective and Scope

The objective of this final year project is to implement a video surveillance system based on the H.264 coding standard running on an embedded platform. Such a system contains an extensive scope of functionalities and would require an extensive amount of development time if implemented from scratch. Hence this project focuses on the data streaming aspect of a video surveillance system. After some initial investigation and experimentation, it was decided to confine the main scope of the project to developing a live streaming H.264-based video system running on a DM6446 EVM development platform. The breakdown of the work to be progressively performed is identified as follows:

1. Familiarization with the open source live555 streaming media server. Due to the complexity of implementing the various standard protocols needed for multimedia streaming, the live555 media server program is used as a base to implement the streaming of the H.264 video data.

2. Streaming of a stored H.264 file over the network. The live555 program is then modified to support streaming of a raw encoded H.264 file from the DM6446 EVM board over the network. Knowledge of the H.264 coding standard is necessary in order to parse the file stream before streaming over the network.

3. Modifying a demo version of an encoder program and integrating it together with live555 to achieve live streaming. The demo encoder was modified to send encoded video data to the live555 program, which performs the necessary packetization for streaming over the network. Since data is passed from one process to another, various inter-process communication techniques were studied and used in this project.

1.3. Resources

The resources used for this project are as follows:

1. DM6446 (DaVinci™) Evaluation Module
2. SWANN C500 Professional CCTV Camera Solution 400 TV Lines CCD Color Camera
3. LCD Display
4. IR Remote Control
5. TI DaVinci demo version of MontaVista Linux Pro v4.0
6. A personal workstation with CentOS v5.0
7. VLC player v0.9.8a as client
8. Open source live555 program (downloaded from www.live555.com)

The system setup of this project is shown below:

1.4. Report Organization

This report consists of eight chapters. Chapter 1 introduces the motivation behind an embedded video streaming system and defines the scope of the project. Chapter 2 presents the literature review of the H.264/AVC video coding technique and the various streaming protocols which are implemented in the project. Chapter 3 covers the hardware literature review of the platform used in the project; the architecture, memory management, inter-process communication and the software tools are also discussed in this chapter. Chapter 4 explains the execution of the encoder program on the DM6446 EVM board; the interaction of the various threads in this multi-threaded application is also discussed to fully understand the encoder program. Chapter 5 gives an overview of the Live555 MediaServer, which is used as a base to implement the video streamer module on the board.
Adding support for unicast and multicast streaming, porting live555 to the board, and receiving the video stream on a remote VLC client are also explained in this chapter. Chapter 6 explains the limitations of file streaming and the move towards a live streaming system; various integration methodologies and the modifications to both the encoder program and the live555 program are shown as well. Chapter 7 summarizes the implementation results of file and live streaming and analyses their performance. Chapter 8 gives the conclusion, stating the current limitations and problems and the scope for future work.

Chapter 2: Video Literature Review

2.1. H.264/AVC Video Codec Overview

H.264 is the most advanced and latest video coding technique. Although there are many video coding schemes like H.26x and MPEG, H.264/AVC introduced many improvements and tools for coding efficiency and error resiliency. This chapter will briefly discuss the network aspect of the video coding technique. It will also cover the error resiliency needed for transmission of video data over the network. For a more detailed explanation of H.264/AVC, refer to Appendix A.

2.1.1. Network Abstraction Layer (NAL)

The aim of the NAL is to ensure that the data coming from the VCL layer is "network worthy" so that the data can be used in numerous systems. The NAL facilitates the mapping of H.264/AVC VCL data to different transport layers such as:

* RTP/IP real-time streaming over wired and wireless mediums
* Different storage file formats such as MP4, MMS, AVI, etc.

The concepts of the NAL and the error robustness techniques of H.264/AVC will be discussed in the following parts of the report.

NAL Units

The encoded data from the VCL are packed into NAL units. A NAL unit represents a packet made up of a certain number of bytes. The first byte of the NAL unit is called the header byte, which indicates the data type of the NAL unit. The remaining bytes make up the payload data of the NAL unit. The NAL unit structure allows provision for different transport systems, namely packet-oriented and bit-stream-oriented. To cater for bit-stream-oriented transport systems like MPEG-2, the NAL units are organized into a byte stream format. These units are prefixed by a specific start code prefix of three bytes, namely 0x000001. The start code prefix indicates the start of each NAL unit and hence defines the boundaries of the units. For packet-oriented transport systems, the encoded video data are transported via packets defined by transport protocols. Hence, the boundaries of the NAL units are known without having to include the start code prefix bytes. The details of packetization of NAL units will be discussed in later sections of the report. NAL units are further categorized into two types:

* VCL unit: comprises the encoded video data.
* Non-VCL unit: comprises additional information like parameter sets, which hold the important header information. It also contains supplemental enhancement information (SEI), which carries timing information and other data that increase the usability of the decoded video signal.

Access Units

A group of NAL units which adhere to a certain form is called an access unit. When one access unit is decoded, one decoded picture is formed. In Table 1 below, the functions of the NAL units derived from the access units are explained.
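To make the byte-stream format described above concrete, the sketch below scans an Annex-B buffer for the three-byte 0x000001 start-code prefix and reports each NAL unit's type, taken from the low five bits of the header byte. This is an illustrative example written for this report, not code from the project's encoder or from live555; the sample bytes and the helper names (findStartCodes, splitAnnexB) are hypothetical, and emulation-prevention bytes are ignored for simplicity.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// One parsed NAL unit: pointer to its header byte and its length in bytes.
struct NalUnit {
    const uint8_t* data;
    size_t         size;
};

// Return the offsets of every three-byte start code (0x00 0x00 0x01) in buf.
static std::vector<size_t> findStartCodes(const uint8_t* buf, size_t len) {
    std::vector<size_t> offsets;
    for (size_t i = 0; i + 3 <= len; ++i) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01)
            offsets.push_back(i);
    }
    return offsets;
}

// Split an Annex-B byte stream into NAL units using the start-code boundaries.
static std::vector<NalUnit> splitAnnexB(const uint8_t* buf, size_t len) {
    std::vector<NalUnit> units;
    std::vector<size_t> sc = findStartCodes(buf, len);
    for (size_t k = 0; k < sc.size(); ++k) {
        size_t begin = sc[k] + 3;                              // skip the prefix
        size_t end   = (k + 1 < sc.size()) ? sc[k + 1] : len;  // up to next prefix
        if (end > begin) units.push_back({buf + begin, end - begin});
    }
    return units;
}

int main() {
    // Hypothetical sample buffer: an SPS (type 7), a PPS (type 8) and an IDR slice (type 5).
    const uint8_t stream[] = {
        0x00, 0x00, 0x01, 0x67, 0x42, 0x00, 0x1E,   // SPS
        0x00, 0x00, 0x01, 0x68, 0xCE, 0x38, 0x80,   // PPS
        0x00, 0x00, 0x01, 0x65, 0x88, 0x84, 0x00    // IDR slice
    };
    for (const NalUnit& n : splitAnnexB(stream, sizeof(stream))) {
        unsigned type = n.data[0] & 0x1F;            // low five bits of the header byte
        std::printf("NAL unit: type=%u, %zu bytes\n", type, n.size);
    }
    return 0;
}
```

A file streamer needs exactly this kind of boundary detection before packetization, since the start codes themselves are not carried in packet-oriented transport.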
Data/Error Robustness Techniques

H.264/AVC has several techniques to mitigate error and data loss, which is an essential quality when it comes to streaming applications. The techniques are as follows:

* Parameter sets: contain information that applies to a large number of VCL NAL units. There are two kinds of parameter sets: the Sequence Parameter Set (SPS), with information pertaining to a sequence of encoded pictures, and the Picture Parameter Set (PPS), with information pertaining to one or more individual pictures. The above-mentioned parameters hardly change, and hence they need not be transmitted repeatedly, which saves overhead. The parameter sets can be sent "in-band", carried in the same channel as the VCL NAL units, or "out-of-band" using a reliable transport protocol. This therefore enhances the resiliency towards data and error loss.
* Flexible Macroblock Ordering (FMO): FMO maps the macroblocks to different slice groups. In the event of any slice group loss, the missing data is masked by interpolating from the other slice groups.
* Redundant Slices (RS): a redundant representation of the picture can be stored in the redundant slices. If the original slice is lost, the decoder can make use of the redundant slices to recover it.

These techniques introduced in H.264/AVC make the codec more robust and resilient towards data and error loss.

2.1.2. Profiles and Levels

A profile of a codec is defined as the set of features identified to meet a certain specification of intended applications. For the H.264/AVC codec, it is defined as a set of features identified to generate a conforming bit stream. A level imposes restrictions on some key parameters of the bit stream. In H.264/AVC there are three profiles, namely Baseline, Main and Extended. Figure 5 shows the relationship between these profiles. The Baseline profile is most likely to be used by network cameras and encoders as it requires limited computing resources. It is quite ideal to make use of this profile to support real-time streaming applications on an embedded platform.

2.2. Overview of Video Streaming

In previous systems, accessing video data across a network exploited the 'download and play' approach. In this approach, the client had to wait until the whole video file was downloaded to the media player before play-out began. To combat the long initial play-out delay, the concept of streaming was introduced. Streaming allows the client to play out the earlier part of the video data whilst still transferring the remaining part. The major advantage of the streaming concept is that the video data need not be stored on the client's computer, as compared to the traditional 'download and play' approach. This reduces the long initial play-out delay experienced by the client. Streaming adopts the traditional client/server model. The client connects to the listening server and requests video data. The server sends video data over to the client for play-out.

2.2.1. Types of Streaming

There are three different types of streaming video data: pre-recorded/file streaming, live/real-time streaming and interactive streaming.

* Pre-recorded/file streaming: The encoded video is stored in a file and the system streams the file over the network. A major overhead is the long initial play-out delay (10-15 s) experienced by the client.
* Live/real-time streaming: The encoded video is streamed over the network directly without being stored in a file. The initial play-out delay is reduced. Consideration must be taken to ensure that the play-out rate does not exceed the sending rate, which would result in a jerky picture.
On the other hand, if the sending rate is too slow, the packets arriving at the client may be dropped, causing in a freezing the picture. The timing requirement for the end-to-end delay is more stringent in this scenario. * Interactive streaming: Like live streaming, the video is streamed directly over the network. It responds to users control input such as rewind, pause, stop, play and forward the particular video stream. The system should respond in accordance to those inputs by the user. In this project, both pre-recorded and live streaming are implemented. Some functionality of interactive streaming controls like stop and play are also part of the system. 2.2.2. Video Streaming System modules Video Source The intent of the video source is to capture the raw video sequence. The CCTV camera is used as the video source in this project. Most cameras are of analogue inputs and these inputs are connected to the encoding station via video connections. This project makes use of only one video source due to the limitation of the video connections on the encoding station. The raw video sequence is then passed onto the encoding station. Encoding Station The aim of the encoding station digitized and encodes the raw video sequence into the desired format. In the actual system, the encoding is done by the DM6446 board into the H.264/AVC format. Since the hardware encoding is CPU intensive, this forms the bottleneck of the whole streaming system. The H.264 video is passed onto the video streamer server module of the system. Video Streaming and WebServer The role of the video streaming server is to packetize the H.264/AVC to be streamed over the network. It serves the requests from individual clients. It needs to support the total bandwidth requirements of the particular video stream requested by clients. WebServer offers a URL link which connects to the video streaming server. For this project, the video streaming server module is embedded inside DM6446 board and it is serves every individual clients requests. Video Player The video player acts a client connecting to and requesting video data from the video streaming server. Once the video data is received, the video player buffers the data for a while and then begins play out of data. The video player used for this project is the VideoLAN (VLC) Player. It has the relevant H.264/AVC codec so that it can decode and play the H264/AVC video data. 2.2.3. Unicast VS Multicast There are two key delivery techniques employed by streaming media distribution. Unicast transmission is the sending of data to one particular network destination host over a packet switched network. It establishes two way point-to-point connection between client and server. The client communicates directly with the server via this connection. The drawback is that every connection receives a separate video stream which uses up network bandwidth rapidly. Multicast transmission is the sending of only one copy of data via the network so that many clients can receive simultaneously. In video streaming, it is more cost effective to send single copy of video data over the network so as to conserve the network bandwidth. Since multicast is not connection oriented, the clients cannot control the streams that they can receive. In this project, unicast transmission is used to stream encoded video over the network. The client connects directly to the DM6446 board where it gets the encoded video data. The project can easily be extended to multicast transmission. 2.3. 
Streaming Protocols When streaming video content over a network, a number of network protocols are used. These protocols are well defined by the Internet Engineering Task Force (IETF) and the Internet Society (IS) and documented in Request for Comments (RFC) documents. These standards are adopted by many developers today. In this project, the same standards are also employed in order to successfully stream H.264/AVC content over a simple Local Area Network (LAN). The following sections will discuss about the various protocols that are studied in the course of this project. 2.3.1. Real-Time Streaming Protocol (RTSP) The most commonly used application layer protocol is RTSP. RTSP acts a control protocol to media streaming servers. It establishes connection between two end points of the system and control media sessions. Clients issue VCR-like commands like play and pause to facilitate the control of real-time playback of media streams from the servers. However, this protocol is not involved in the transport of the media stream over the network. For this project, RTSP version 1.0 is used. RTSP States Like the Hyper Text Transfer Protocol (HTTP), it contains several methods. They are OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, RECORD and TEARDOWN. These commands are sent by using the RTSP URL. The default port number used in this protocol is 554. An example of such as URL is: rtsp://  · OPTIONS: An OPTIONS request returns the types of request that the server will accept. An example of the request is: OPTIONS rtsp://155.69.148.136:554/test.264 RTSP/1.0 CSeq: 1rn User-agent: VLC media Player The CSeq parameter keeps track of the number of request send to the server and it is incremented every time a new request is issued. The User-agent refers to the client making the request. * DESCRIBE: This method gets the presentation or the media object identified in the request URL from the server. An example of such a request: DESCRIBE rtsp://155.69.148.138:554/test.264 RTSP/1.0 CSeq: 2rn Accept: application/sdprn User agent: VLC media Player The Accept header is used to describe the formats understood by the client. All the initialization of the media resource must be present in the DESCRIBE method that it describes.  · SETUP: This method will specify the mode of transport mechanism to be used for the media stream. A typical example is: SETUP rtsp://155.69.148.138:554/test.264 RTSP/1.0 CSeq: 3rn Transport: RTP/AVP; unicast; client_port = 1200-1201 User agent: VLC media Player The Transport header specifies the transport mechanism to be used. In this case, real-time transport protocol is used in a unicast manner. The relevant client port number is also reflected and it is selected randomly by the server. Since RTSP is a stateful protocol, a session is created upon successful acknowledgement to this method.  · PLAY: This method request the server to start sending the data via the transport mechanism stated in the SETUP method. The URL is the same as the other methods except for: Session: 6 Range: npt= 0.000- rn The Session header specifies the unique session id. This is important as server may establish various sessions and this keep tracks of them. The Range header positions play time to the beginning and plays till the end of the range. * PAUSE: This method informs the server to pause sending of the media stream. Once the PAUSE request is sent, the range header will capture the position at which the media stream is paused. 
When a PLAY request is sent again, the client will resume playing from the current position of the media stream as specified in the range header. RSTP Status Codes Whenever the client sends a request message to the server, the server forms a equivalent response message to be sent to the client. The response codes are similar to HTTP as they are both in ASCII text. They are as follows: 200: OK 301: Redirection 405: Method Not Allowed 451: Parameter Not Understood 454: Session Not Found 457: Invalid Range 461: Unsupported Transport 462: Destination Unreachable These are some of the RTSP status codes. There are many others but the codes mentioned above are of importance in the context of this project. 2.3.2. Real-time Transport Protocol (RTP) RTP is a defined packet structure which is used for transporting media stream over the network. It is a transport layer protocol but developers view it as a application layer protocol stack. This protocol facilitates jitter compensation and detection of incorrect sequence arrival of data which is common for transmission over IP network. For the transmission of media data over the network, it is important that packets arrive in a timely manner as it is loss tolerant but not delay tolerant. Due to the high latency of Transmission Control Protocol in establishing connections, RTP is often built on top of the User Datagram Protocol (UDP). RTP also supports multicast transmission of data. RTP is also a stateful protocol as a session is established before data can be packed into the RTP packet and sent over the network. The session contains the IP address of the destination and port number of the RTP which is usually an even number. The following section will explain about the packet structure of RTP which is used for transmission. RTP Packet Structure The below shows a RTP packet header which is appended in front of the media data.s The minimum size of the RTP header is 12 bytes.. Optional extension information may be present after the header information. The fields of the header are:  · V: (2 bits) to indicate the version number of the protocol. Version used in this project is 2.  · P (Padding): (1 bit) to indicate if there padding which can be used for encryption algorithm  · X (Extension): (1 bit) to indicate if there is extension information between header and payload data.  · CC (CSRC Count) : (4 bits) indicates the number of CSRC identifiers  · M (Marker): (1 bit) used by application to indicate data has specific relevance in the perspective of the application. The setting for M bit marks the end of video data in this project  · PT (Payload Type): (7 bits) to indicate the type of payload data carried by the packet. H.264 is used for this project  · Sequence number: (16 bits) incremented by one for every RTP packet. It is used to detect packet loss and out of sequence packet arrival. Based on this information, application can take appropriate action to correct them.  · Time Stamp: (32 bits) receivers use this information to play samples at correct intervals of time. Each stream has independent time stamps.  · SSRC: (32 bits) it unique identifies source of the stream.  · CSRC: sources of a stream from different sources are enumerated according to its source IDs. This project does not involve the use of Extension field in the packet header and hence will not be explained in this report. Once this header information is appended to the payload data, the packet is sent over the network to the client to be played. 
The table below summarizes the payload types of RTP and highlighted region is of interest in this project. Table 2: Payload Types of RTP Packets 2.3.3. RTP Control Protocol (RTCP) RTCP is a sister protocol which is used in conjunction with the RTP. It provides out-of-band statistical and control information to the RTP session. This provides certain Quality of Service (QoS) for transmission of video data over the network. The primary functions of the RTCP are: * To gather statistical information about the quality aspect of the media stream during a RTP session. This data is sent to the session media source and its participants. The source can exploit this information for adaptive media encoding and detect transmission errors. * It provides canonical end point identifiers (CNAME) to all its session participants. It allows unique identification of end points across different application instances and serves as a third party monitoring tool. * It also sends RTCP reports to all its session participants. By doing so, the traffic bandwidth increases proportionally. In order to avoid congestion, RTCP has bandwidth management techniques to only use 5% of the total session bandwidth. RTCP statistical data is sent odd numbered ports. For instance, if RTP port number is 196, then RTCP will use the 197 as its port number. There is no default port number assigned to RTCP. RTCP Message Types RTCP sends several types of packets different from RTP packets. They are sender report, receiver report, source description and bye.  · Sender Report (SR): Sent periodically by senders to report the transmission and reception statistics of RTP packets sent in a period of time. It also includes the senders SSRC and senders packet count information. The timestamp of the RTP packet is also sent to allow the receiver to synchronize the RTP packets. The bandwidth required for SR is 25% of RTCP bandwidth.  · Receiver Report (RR): It reports the QoS to other receivers and senders. Information like highest sequence number received, inter arrival jitter of RTP packets and fraction of packets loss further explains the QoS of the transmitted media streams. The bandwidth required for RR is 75% of the RTCP bandwidth.  · Source Description (SDES): Sends the CNAME to its session participants. Additional information like name, address of the owner of the source can also be sent.  · End of Participation (BYE): The source sends a BYE message to indicate that it is shutting down the stream. It serves as an announcement that a particular end point is leaving the conference. Further RTCP Consideration This protocol is important to ensure that QoS standards are achieved. The acceptable frequencies of these reports are less than one minute. In major application, the frequency may increase as RTCP bandwidth control mechanism. Then, the statistical reporting on the quality of the media stream becomes inaccurate. Since there are no long delays introduced between the reports in this project, the RTCP is adopted to incorporate a certain level of QoS on streaming H.264/AVC video over embedded platform. 2.3.4. Session Description Protocol (SDP) The Session Description Protocol is a standard to describe streaming media initialization parameters. These initializations describe the sessions for session announcement, session invitation and parameter negotiation. This protocol can be used together with RTSP. In the previous sections of this chapter, SDP is used in the DESCRIBE state of RTSP to get sessions media initialization parameters. 
H.264 Video Streaming System on Embedded Platform

ABSTRACT

The adoption of technological products like digital television and video conferencing has made video streaming an active research area. This report presents the integration of a video streamer module into a baseline H.264/AVC encoder running on a TMSDM6446EVM embedded platform. The main objective of this project is to achieve real-time streaming of baseline H.264/AVC video over a local area network (LAN) as part of a surveillance video system. The encoding of baseline H.264/AVC and the hardware components of the platform are first discussed.
Various streaming protocols are studied in order to implement the video streamer on the DM6446 board. A multi-threaded encoder application is used to encode raw video frames into H.264/AVC format and write them to a file. For video streaming, the open-source Live555 MediaServer was used to stream video data to a remote VLC client over the LAN. Initially, file streaming was implemented from PC to PC; upon successful implementation on the PC, the video streamer was ported to the board, and the steps involved in porting the Live555 application are also described in this report. Both unicast and multicast file streaming were implemented in the video streamer. Because of the limitations of file streaming, the live streaming approach was then adopted. Several methodologies for integrating the video streamer with the encoder program are discussed, and modifications were made to both the encoder program and the Live555 application to achieve live streaming of H.264/AVC video. Results of both file and live streaming are shown in this report. The implemented video streamer module will be used as a base module of the video surveillance system.

Chapter 1: Introduction

1.1. Background

Significant breakthroughs have been made over the last few years in the area of digital video compression technologies. Applications making use of these technologies have become prevalent and remain active research topics today; digital television and video conferencing, for example, are now commonly encountered in our daily lives. One application of interest here is to use these technologies to implement a video camera surveillance system which can enhance the security of consumers' business and home environments. In typical surveillance systems, the captured video is sent over cable networks to be monitored and stored at remote stations. As the captured raw video contains a large amount of data, it is advantageous to first compress the data using a compression technique before it is transferred over the network. One compression technique suitable for this type of application is the H.264 coding standard. H.264 is better suited than other coding techniques for video streaming because it is more robust to data loss and offers better coding efficiency, both of which are important factors when streaming is performed over a shared Local Area Network. With the increasing acceptance of H.264 coding and the availability of high-computing-power embedded systems, a digital video surveillance system based on H.264 on an embedded platform is a feasible and potentially more cost-effective system.

Implementing an H.264 video streaming system on an embedded platform is a logical extension of video surveillance systems, which are still typically implemented using high-computing-power stations (e.g. PCs). In an embedded version, a Digital Signal Processor (DSP) forms the core of the embedded system and executes the intensive signal processing algorithms. Current embedded systems also typically include network features which enable the implementation of data streaming applications. To facilitate data streaming, a number of network protocol standards have also been defined and are currently used for digital video applications.

1.2. Objective and Scope

The objective of this final year project is to implement a video surveillance system based on the H.264 coding standard running on an embedded platform.
Such a system covers an extensive scope of functionality and would require an extensive amount of development time if implemented from scratch. Hence this project focuses on the data streaming aspect of a video surveillance system. After some initial investigation and experimentation, it was decided to confine the main scope of the project to developing a live streaming H.264-based video system running on a DM6446 EVM development platform. The work to be progressively performed was then broken down as follows:

1. Familiarization with the open-source live555 streaming media server. Due to the complexity of implementing the various standard protocols needed for multimedia streaming, the live555 media server program is used as a base to implement the streaming of the H.264-based video data.

2. Streaming of a stored H.264 file over the network. live555 is then modified to support streaming of a raw encoded H.264 file from the DM6446 EVM board over the network. Knowledge of the H.264 coding standard is necessary in order to parse the file stream before streaming it over the network.

3. Modifying a demo version of an encoder program and integrating it with live555 to achieve live streaming. The demo encoder was modified to send encoded video data to the Live555 program, which performs the necessary packetization for streaming over the network. Since data is passed from one process to another, various inter-process communication techniques were studied and used in this project.

1.3. Resources

The resources used for this project are as follows:

1. DM6446 (DaVinci™) Evaluation Module
2. SWANN C500 Professional CCTV Camera Solution 400 TV Lines CCD Color Camera
3. LCD Display
4. IR Remote Control
5. TI DaVinci demo version of MontaVista Linux Pro v4.0
6. A personal workstation with CentOS v5.0
7. VLC player v0.9.8a as client
8. Open-source live555 program (downloaded from www.live555.com)

The system setup of this project is shown below.

1.4. Report Organization

This report consists of 8 chapters. Chapter 1 introduces the motivation behind an embedded video streaming system and defines the scope of the project. Chapter 2 presents the video literature review of the H.264/AVC video coding technique and the various streaming protocols implemented in the project. Chapter 3 presents the hardware literature review of the platform used in the project; its architecture, memory management, inter-process communication and software tools are also discussed. Chapter 4 explains the execution of the encoder program on the DM6446EVM board, including the interaction of the various threads in this multi-threaded application. Chapter 5 gives an overview of the Live555 MediaServer, which is used as a base to implement the video streamer module on the board; adding support for unicast and multicast streaming, porting live555 to the board and receiving the video stream on a remote VLC client are explained in that chapter. Chapter 6 explains the limitations of file streaming and the move towards a live streaming system; the various integration methodologies and the modifications to both the encoder program and the live555 program are shown as well. Chapter 7 summarizes the implementation results of file and live streaming and analyses their performance. Chapter 8 concludes by stating the current limitations and problems and the scope for future work.
Chapter 2: Video Literature Review

2.1. H.264/AVC Video Codec Overview

H.264 is the most advanced and latest video coding technique. Although there are many video coding schemes such as H.26x and MPEG, H.264/AVC introduced many improvements and tools for coding efficiency and error resiliency. This chapter briefly discusses the network aspects of the video coding technique and the error resiliency needed for transmission of video data over the network. For a more detailed explanation of H.264/AVC, refer to Appendix A.

2.1.1. Network Abstraction Layer (NAL)

The aim of the NAL is to ensure that the data coming from the VCL layer is "network worthy", so that the data can be used by numerous systems. The NAL facilitates the mapping of H.264/AVC VCL data onto different transport layers such as:

* RTP/IP for real-time streaming over wired and wireless mediums
* Different storage file formats such as MP4, MMS, AVI, etc.

The concepts of the NAL and the error robustness techniques of H.264/AVC are discussed in the following parts of the report.

NAL Units

The encoded data from the VCL are packed into NAL units. A NAL unit represents a packet made up of a certain number of bytes. The first byte of the NAL unit is the header byte, which indicates the data type of the NAL unit; the remaining bytes make up the payload data. The NAL unit structure allows provision for two kinds of transport systems: packet-oriented and bit-stream-oriented. To cater for bit-stream-oriented transport systems such as MPEG-2, the NAL units are organized into a byte stream format in which each unit is prefixed by a specific three-byte start code prefix, namely 0x000001. The start code prefix indicates the start of each NAL unit and hence defines the boundaries of the units. For packet-oriented transport systems, the encoded video data is transported via packets defined by transport protocols, so the boundaries of the NAL units are known without having to include a start code prefix. The details of packetization of NAL units are discussed in later sections of the report.

NAL units are further categorized into two types:

* VCL units: comprise the encoded video data.
* Non-VCL units: comprise additional information such as parameter sets, which carry important header information, and supplemental enhancement information (SEI), which contains timing information and other data that increases the usability of the decoded video signal.

Access Units

A group of NAL units which adhere to a certain form is called an access unit. When one access unit is decoded, one decoded picture is formed. Table 1 below explains the functions of the NAL units derived from the access units.
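To make the byte stream format concrete, the following is a minimal sketch (not the project's actual parser) of how the 0x000001 start code prefix described above can be used to locate NAL unit boundaries and read the type field from the header byte. The file name is a placeholder for an encoded bit stream.

#include <cstdint>
#include <cstdio>
#include <vector>

// Return the offset of the first payload byte of each NAL unit,
// i.e. the byte immediately following a 0x000001 start code prefix.
static std::vector<size_t> findNalUnits(const std::vector<uint8_t>& bs) {
    std::vector<size_t> starts;
    for (size_t i = 0; i + 3 < bs.size(); ++i) {
        if (bs[i] == 0x00 && bs[i + 1] == 0x00 && bs[i + 2] == 0x01) {
            starts.push_back(i + 3);   // first byte after the 3-byte start code
            i += 2;                    // skip past the start code
        }
    }
    return starts;
}

int main() {
    FILE* f = std::fopen("test.264", "rb");   // placeholder encoded file
    if (!f) return 1;
    std::vector<uint8_t> bs;
    int c;
    while ((c = std::fgetc(f)) != EOF) bs.push_back(static_cast<uint8_t>(c));
    std::fclose(f);

    for (size_t pos : findNalUnits(bs)) {
        uint8_t header = bs[pos];              // header byte of the NAL unit
        int nalType = header & 0x1F;           // low 5 bits: nal_unit_type
        std::printf("NAL unit at byte %zu, type %d\n", pos, nalType);
    }
    return 0;
}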
Data/Error Robustness Techniques

H.264/AVC has several techniques to mitigate error and data loss, which is an essential quality for streaming applications. The techniques are as follows:

* Parameter sets: contain information that applies to a large number of VCL NAL units. There are two kinds of parameter sets:
Sequence Parameter Set (SPS): information pertaining to a sequence of encoded pictures.
Picture Parameter Set (PPS): information pertaining to one or more individual pictures.
These parameters hardly ever change, so they need not be transmitted repeatedly, which saves overhead. The parameter sets can be sent "in-band", i.e. carried in the same channel as the VCL NAL units, or "out-of-band" using a reliable transport protocol. This enhances resiliency towards data and error loss.

* Flexible Macroblock Ordering (FMO): FMO maps the macroblocks to different slice groups. In the event of any slice group loss, the missing data is masked by interpolating from the other slice groups.

* Redundant Slices (RS): a redundant representation of the picture can be stored in the redundant slices. If the original slice is lost, the decoder can make use of the redundant slices to recover it.

These techniques make the H.264/AVC codec more robust and resilient towards data and error loss.

2.1.2. Profiles and Levels

A profile of a codec is defined as the set of features identified to meet certain specifications of intended applications; for the H.264/AVC codec, it is defined as a set of features identified to generate a conforming bit stream. A level imposes restrictions on some key parameters of the bit stream. In H.264/AVC there are three profiles, namely Baseline, Main and Extended; Figure 5 shows the relationship between these profiles. The Baseline profile is the most likely to be used by network cameras and encoders as it requires limited computing resources, and it is well suited to supporting real-time streaming applications on an embedded platform.

2.2. Overview of Video Streaming

Earlier systems accessing video data across a network used the 'download and play' approach, in which the client had to wait until the whole video file was downloaded to the media player before playout began. To combat the long initial playout delay, the concept of streaming was introduced. Streaming allows the client to play out the earlier part of the video data while the remaining part is still being transferred. The major advantage of streaming is that the video data need not be stored on the client's computer, unlike in the traditional 'download and play' approach, and the long initial playout delay experienced by the client is reduced. Streaming adopts the traditional client/server model: the client connects to the listening server and requests video data, and the server sends video data over to the client for playout.

2.2.1. Types of Streaming

There are three different types of video streaming: pre-recorded/file streaming, live/real-time streaming and interactive streaming.

* Pre-recorded/file streaming: the encoded video is stored in a file and the system streams the file over the network. A major overhead is the long initial playout delay (10-15 s) experienced by the client.

* Live/real-time streaming: the encoded video is streamed over the network directly without being stored in a file, so the initial playout delay is reduced. Care must be taken to ensure that the playout rate does not exceed the sending rate, which would result in a jerky picture. On the other hand, if the sending rate is too slow, packets may arrive at the client too late to be played and are dropped, causing the picture to freeze. The timing requirement on the end-to-end delay is more stringent in this scenario.

* Interactive streaming: as in live streaming, the video is streamed directly over the network, but the system must also respond to the user's control inputs, such as rewind, pause, stop, play and forward, on the particular video stream.

In this project, both pre-recorded and live streaming are implemented.
Some interactive streaming controls, such as stop and play, are also part of the system.

2.2.2. Video Streaming System Modules

Video Source

The video source captures the raw video sequence. A CCTV camera is used as the video source in this project. Most cameras provide analogue outputs, and these are connected to the encoding station via video connections. This project makes use of only one video source due to the limited number of video connections on the encoding station. The raw video sequence is then passed on to the encoding station.

Encoding Station

The encoding station digitizes and encodes the raw video sequence into the desired format. In the actual system, the encoding into the H.264/AVC format is done by the DM6446 board. Since the hardware encoding is CPU intensive, this forms the bottleneck of the whole streaming system. The H.264 video is passed on to the video streaming server module of the system.

Video Streaming Server and Web Server

The role of the video streaming server is to packetize the H.264/AVC data to be streamed over the network. It serves the requests from individual clients and must support the total bandwidth requirements of the video streams requested by the clients. The web server offers a URL link which connects to the video streaming server. For this project, the video streaming server module is embedded inside the DM6446 board and serves every individual client's requests.

Video Player

The video player acts as a client, connecting to and requesting video data from the video streaming server. Once the video data is received, the video player buffers the data for a while and then begins playout. The video player used for this project is the VideoLAN (VLC) player, which has the relevant H.264/AVC codec to decode and play the H.264/AVC video data.

2.2.3. Unicast vs Multicast

There are two key delivery techniques employed in streaming media distribution. Unicast transmission is the sending of data to one particular network destination host over a packet-switched network. It establishes a two-way point-to-point connection between client and server, and the client communicates directly with the server via this connection. The drawback is that every connection receives a separate video stream, which uses up network bandwidth rapidly. Multicast transmission is the sending of only one copy of the data over the network so that many clients can receive it simultaneously. In video streaming it is more cost effective to send a single copy of the video data over the network so as to conserve network bandwidth; however, since multicast is not connection oriented, clients cannot control the streams that they receive. In this project, unicast transmission is used to stream the encoded video over the network: the client connects directly to the DM6446 board from which it gets the encoded video data. The project can easily be extended to multicast transmission.
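As an illustration of how such a unicast, on-demand video streaming server can be set up, the sketch below follows the pattern of live555's own testOnDemandRTSPServer example. It is not the exact code used on the board; the stream/file name "test.264" and the use of port 554 are assumptions for illustration only.

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    // Default RTSP port 554 (may require elevated privileges; live555's demos also use 8554).
    RTSPServer* rtspServer = RTSPServer::createNew(*env, 554);
    if (rtspServer == NULL) { *env << "Failed to create RTSP server\n"; return 1; }

    // One server media session per stream name; each connected client
    // receives its own unicast copy of the stream.
    ServerMediaSession* sms = ServerMediaSession::createNew(
        *env, "test.264", "test.264", "H.264 file stream");
    sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(
        *env, "test.264", False /*reuseFirstSource*/));
    rtspServer->addServerMediaSession(sms);

    *env << "Stream ready at " << rtspServer->rtspURL(sms) << "\n";
    env->taskScheduler().doEventLoop();   // does not return
    return 0;
}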
2.3. Streaming Protocols

When streaming video content over a network, a number of network protocols are used. These protocols are well defined by the Internet Engineering Task Force (IETF) and the Internet Society, are documented in Request for Comments (RFC) documents, and are adopted by many developers today. In this project, the same standards are employed in order to stream H.264/AVC content over a simple Local Area Network (LAN). The following sections discuss the various protocols studied in the course of this project.

2.3.1. Real-Time Streaming Protocol (RTSP)

The most commonly used application layer control protocol is RTSP. RTSP acts as a control protocol for media streaming servers: it establishes a connection between the two end points of the system and controls media sessions. Clients issue VCR-like commands such as play and pause to control the real-time playback of media streams from the servers. However, this protocol is not involved in the transport of the media stream itself over the network. For this project, RTSP version 1.0 is used.

RTSP Methods

Like the Hyper Text Transfer Protocol (HTTP), RTSP defines several methods: OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, RECORD and TEARDOWN. These commands are sent using an RTSP URL, and the default port number used by the protocol is 554. An example of such a URL is: rtsp://

* OPTIONS: an OPTIONS request returns the types of request that the server will accept. An example of the request is:

OPTIONS rtsp://155.69.148.136:554/test.264 RTSP/1.0
CSeq: 1\r\n
User-Agent: VLC media player

The CSeq parameter keeps track of the number of requests sent to the server and is incremented every time a new request is issued. The User-Agent field identifies the client making the request.

* DESCRIBE: this method retrieves the description of the presentation or media object identified by the request URL from the server. An example of such a request:

DESCRIBE rtsp://155.69.148.138:554/test.264 RTSP/1.0
CSeq: 2\r\n
Accept: application/sdp\r\n
User-Agent: VLC media player

The Accept header describes the formats understood by the client. All the initialization information for the media resource it describes must be present in the DESCRIBE response.

* SETUP: this method specifies the transport mechanism to be used for the media stream. A typical example is:

SETUP rtsp://155.69.148.138:554/test.264 RTSP/1.0
CSeq: 3\r\n
Transport: RTP/AVP; unicast; client_port=1200-1201
User-Agent: VLC media player

The Transport header specifies the transport mechanism to be used; in this case the Real-time Transport Protocol is used in a unicast manner. The client port numbers on which the client will receive the stream are also given. Since RTSP is a stateful protocol, a session is created upon successful acknowledgement of this method.

* PLAY: this method requests the server to start sending data via the transport mechanism stated in the SETUP method. The request is the same as in the other methods except for the additional headers:

Session: 6
Range: npt=0.000-\r\n

The Session header specifies the unique session id; this is important as the server may establish several sessions and this keeps track of them. The Range header positions the play time at the beginning and plays till the end of the range.

* PAUSE: this method tells the server to pause sending the media stream. Once the PAUSE request is sent, the Range header captures the position at which the media stream was paused. When a PLAY request is sent again, the client resumes playing from the position of the media stream specified in the Range header.

RTSP Status Codes

Whenever the client sends a request message to the server, the server forms a corresponding response message to be sent back to the client. The response codes are similar to HTTP, as both are in ASCII text. They include:

200: OK
301: Redirection
405: Method Not Allowed
451: Parameter Not Understood
454: Session Not Found
457: Invalid Range
461: Unsupported Transport
462: Destination Unreachable

These are some of the RTSP status codes; there are many others, but the codes above are the ones of importance in the context of this project.
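To illustrate how a request such as the OPTIONS example above travels over the wire, the following is a minimal sketch of an RTSP client that opens a TCP connection to port 554, sends one request and prints the reply (on success the first line should read "RTSP/1.0 200 OK"). The server address and stream name are taken from the examples above purely for illustration, and error handling is kept minimal.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(554);                             // default RTSP port
    inet_pton(AF_INET, "155.69.148.136", &server.sin_addr);   // address from the example above
    if (connect(sock, reinterpret_cast<sockaddr*>(&server), sizeof(server)) < 0) return 1;

    // Each header line ends with CRLF; a blank line terminates the request.
    const char* request =
        "OPTIONS rtsp://155.69.148.136:554/test.264 RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "User-Agent: example client\r\n"
        "\r\n";
    send(sock, request, std::strlen(request), 0);

    char reply[2048] = {0};
    ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0);
    if (n > 0) std::printf("%s\n", reply);
    close(sock);
    return 0;
}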
2.3.2. Real-time Transport Protocol (RTP)

RTP defines the packet structure used for transporting a media stream over the network. It is a transport layer protocol, although developers view it as part of the application layer protocol stack. The protocol facilitates jitter compensation and detection of out-of-order arrival of data, both of which are common in transmission over IP networks. For the transmission of media data over the network it is important that packets arrive in a timely manner, as the media is loss tolerant but not delay tolerant. Due to the high latency of the Transmission Control Protocol in establishing connections, RTP is usually built on top of the User Datagram Protocol (UDP). RTP also supports multicast transmission of data. RTP is a stateful protocol in the sense that a session is established before data can be packed into RTP packets and sent over the network; the session contains the IP address of the destination and the RTP port number, which is usually an even number. The following section explains the RTP packet structure used for transmission.

RTP Packet Structure

The figure below shows the RTP packet header, which is prepended to the media data. The minimum size of the RTP header is 12 bytes; optional extension information may be present after the header. The fields of the header are:

* V (Version): (2 bits) indicates the version number of the protocol; version 2 is used in this project.
* P (Padding): (1 bit) indicates whether padding is present, which can be used by encryption algorithms.
* X (Extension): (1 bit) indicates whether there is extension information between the header and the payload data.
* CC (CSRC Count): (4 bits) indicates the number of CSRC identifiers.
* M (Marker): (1 bit) used by the application to indicate that the data has specific relevance from the perspective of the application. In this project, setting the M bit marks the end of the video data.
* PT (Payload Type): (7 bits) indicates the type of payload data carried by the packet; H.264 is used in this project.
* Sequence Number: (16 bits) incremented by one for every RTP packet; used to detect packet loss and out-of-sequence arrival, so that the application can take appropriate corrective action.
* Timestamp: (32 bits) receivers use this information to play samples at the correct intervals of time; each stream has independent timestamps.
* SSRC: (32 bits) uniquely identifies the source of the stream.
* CSRC: enumerates the contributing sources of a stream that comes from several sources, according to their source IDs.

This project does not make use of the Extension field in the packet header, so it is not explained in this report. Once this header information is prepended to the payload data, the packet is sent over the network to the client to be played. The table below summarizes the payload types of RTP; the highlighted region is of interest in this project.

Table 2: Payload Types of RTP Packets
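A minimal sketch of how the 12-byte header described above can be serialized into network byte order is shown below. The payload type value 96 is an assumption: H.264 is normally carried under a dynamic payload type, and the actual number is whatever the session description maps to H264/90000.

#include <cstdint>

struct RtpHeaderFields {
    bool     marker;          // M bit: end of the video data in this project
    uint8_t  payloadType;     // PT, 7 bits (dynamic type, e.g. 96, assumed here)
    uint16_t sequenceNumber;  // incremented by one per packet
    uint32_t timestamp;       // 90 kHz clock is typical for video
    uint32_t ssrc;            // stream source identifier
};

// Writes exactly 12 bytes into 'out' with V=2, P=0, X=0, CC=0.
void packRtpHeader(const RtpHeaderFields& h, uint8_t out[12]) {
    out[0] = 2u << 6;   // version 2, no padding, no extension, no CSRCs
    out[1] = static_cast<uint8_t>((h.marker ? 0x80 : 0x00) | (h.payloadType & 0x7F));
    out[2] = static_cast<uint8_t>(h.sequenceNumber >> 8);
    out[3] = static_cast<uint8_t>(h.sequenceNumber & 0xFF);
    out[4] = static_cast<uint8_t>(h.timestamp >> 24);
    out[5] = static_cast<uint8_t>(h.timestamp >> 16);
    out[6] = static_cast<uint8_t>(h.timestamp >> 8);
    out[7] = static_cast<uint8_t>(h.timestamp);
    out[8]  = static_cast<uint8_t>(h.ssrc >> 24);
    out[9]  = static_cast<uint8_t>(h.ssrc >> 16);
    out[10] = static_cast<uint8_t>(h.ssrc >> 8);
    out[11] = static_cast<uint8_t>(h.ssrc);
}

int main() {
    uint8_t header[12];
    RtpHeaderFields f{true, 96, 0, 0, 0x12345678};   // illustrative values only
    packRtpHeader(f, header);
    return 0;
}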
2.3.3. RTP Control Protocol (RTCP)

RTCP is a sister protocol used in conjunction with RTP. It provides out-of-band statistical and control information for an RTP session, which supports a certain Quality of Service (QoS) for the transmission of video data over the network. The primary functions of RTCP are:

* To gather statistical information about the quality of the media stream during an RTP session. This data is sent to the media source and the session participants; the source can exploit it for adaptive media encoding and to detect transmission errors.
* To provide canonical end point identifiers (CNAME) to all session participants. These allow unique identification of end points across different application instances and serve as a third-party monitoring tool.
* To send RTCP reports to all session participants. As reports are sent to everyone, the report traffic grows proportionally with the number of participants; to avoid congestion, RTCP has bandwidth management techniques that limit it to 5% of the total session bandwidth.

RTCP statistical data is sent on odd-numbered ports: for instance, if the RTP port number is 196, RTCP will use 197 as its port number. There is no default port number assigned to RTCP.

RTCP Message Types

RTCP sends several types of packets, distinct from RTP packets: sender report, receiver report, source description and bye.

* Sender Report (SR): sent periodically by senders to report the transmission and reception statistics of RTP packets sent over a period of time. It also includes the sender's SSRC and packet count, as well as the RTP timestamp, allowing the receiver to synchronize the RTP packets. The bandwidth allocated to SR packets is 25% of the RTCP bandwidth.
* Receiver Report (RR): reports the QoS observed by a receiver to the other receivers and senders. Information such as the highest sequence number received, the inter-arrival jitter of RTP packets and the fraction of packets lost describes the QoS of the transmitted media streams. The bandwidth allocated to RR packets is 75% of the RTCP bandwidth.
* Source Description (SDES): sends the CNAME to the session participants; additional information such as the name and address of the owner of the source can also be sent.
* End of Participation (BYE): a source sends a BYE message to indicate that it is shutting down the stream. It serves as an announcement that a particular end point is leaving the conference.

Further RTCP Considerations

This protocol is important to ensure that QoS standards are achieved. The acceptable interval between these reports is less than one minute. In larger sessions, the interval between reports may grow because of the RTCP bandwidth control mechanism, and the statistical reporting on the quality of the media stream then becomes less accurate. Since no long delays are introduced between the reports in this project, RTCP is adopted to incorporate a certain level of QoS into the streaming of H.264/AVC video on the embedded platform.

2.3.4. Session Description Protocol (SDP)

The Session Description Protocol is a standard for describing the initialization parameters of streaming media. These descriptions are used for session announcement, session invitation and parameter negotiation, and the protocol can be used together with RTSP. As shown earlier in this chapter, SDP is used in the DESCRIBE exchange of RTSP to obtain the session's media initialization parameters. SDP is scalable and can include different media types and formats.

SDP Syntax

A session is described by attribute/value pairs; the SDP syntax is summarized below. In this project the use of SDP is important because the client is the VLC media player: when streaming is done via RTSP, VLC expects an SDP description from the server in order to set up the session and facilitate playback of the streaming media.
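Purely as an illustration of the attribute/value style (the actual description is generated by the server, and fields such as the addresses, port and payload type are assumptions reusing values from the RTSP examples above), an SDP description for an H.264 stream carried over RTP could look like:

v=0
o=- 0 0 IN IP4 155.69.148.138
s=H.264 Video Stream
c=IN IP4 155.69.148.138
t=0 0
m=video 1200 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1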
Chapter 3: Hardware Literature Review

3.1. Introduction to the Texas Instruments DM6446EVM (DaVinci™)

The development of this project is based on the DM6446EVM board, so it is necessary to understand the hardware and software aspects of this board. The DM6446 board has an ARM processor operating at a clock speed of up to 300 MHz and a C64x Digital Signal Processor operating at a clock speed of up to 600 MHz.

3.1.1. Key Features of the DM6446

The key features, shown in the figure above, are:

* 1 video port which supports composite or S-video input
* 4 video DAC outputs: component, RGB, composite
* 256 MB of DDR2 DRAM
* UART, media card interface (SD, xD, SM, MS, MMC cards)
* 16 MB of non-volatile flash memory, 64 MB NAND flash, 4 MB SRAM
* USB 2.0 interface
* 10/100 Mbps Ethernet interface
* Configurable boot load options
* IR remote interface, real-time clock via MSP430

3.1.2. DM6446EVM Architecture

The architecture of the DM6446 board is organized into several subsystems. By knowing the architecture of the DM6446, the developer can design and build application modules on top of the board's underlying architecture. The figure shows that the DM6446 has three subsystems which are connected to the underlying hardware peripherals. This provides a decoupled architecture which allows developers to implement their applications on a particular subsystem without having to modify the other subsystems. Some of the subsystems are discussed in the next sections.

ARM Subsystem

The ARM subsystem is responsible for the master control of the DM6446 board. It handles the system-level initialization, configuration, user interface, connectivity functions and control of the DSP subsystem. The ARM has a larger program memory space and better context-switching capabilities, and hence it is better suited to handling the complex, multiple tasks of the system.

DSP Subsystem

The DSP subsystem is mainly responsible for encoding the raw captured video frames into the desired format. It performs numerous number-crunching operations in order to achieve the desired compression, and it works together with the Video Imaging Coprocessor to compress the video frames.

Video Imaging Coprocessor (VICP)

The VICP is a signal processing library which contains various software algorithms that execute on the VICP hardware accelerator. It helps the DSP by taking over various computationally intensive tasks. Since hardware implementation of number cru