Thursday, October 31, 2019

Death Penalty Essay Example

Death penalty - Essay Example

The death penalty is the gravest of all punishments, and the Roman Catholic Church's position on it has evolved over time. This change in the Church's view will be discussed thoroughly in this paper. The paper begins with a history of the death penalty and its supporters and opponents, in terms of both societal and religious groups; the Roman Catholic Church's view of the death penalty and its development is then studied.

The death penalty is a serious punishment for those who have committed serious crimes, such as murder, or have harmed society through terrorist acts. Perceptions of the death penalty, among both individuals and nations, have changed over time according to the requirements of each era. In today's era, where crime is pervasive yet seemingly unstoppable, many people, some of them staunch Christians, argue that there is little need for death sentences. But why do we link religion to the death penalty? They are both branches of the same subject: we link religion to this punishment because all religions teach respect for human life and for the right of humans to their lives. We will see in our discussion how the Church changed course from defending the right of self-defence to defending the right to human life in general, and how its teachings shifted from supporting the death penalty as a punishment to opposing it.

We can easily say that some groups will support the death penalty as a punishment, while other, more sensitive ones will oppose it as a violation of human rights. The question is why anyone would support or oppose it. People who support it believe that this punishment should be given to those who have committed serious crimes through which others, directly or indirectly, have lost their lives. It follows the old adage, "a tooth for a tooth".
If these criminals took lives, their lives must be taken in return. The opponents, on the other hand, believe that the death penalty is as bad as what the criminals did: if they killed people, that does not make it acceptable for the authorities to become as ruthless and take their lives in return.

Now that the debate is clear, we can turn to the history of the death penalty. Death penalty laws were first established in the eighteenth century B.C. By the eleventh century A.D., however, it had been decided that only those who committed murder would be hanged, and no others. The death penalty was also traditional in Europe for many centuries. People were being executed into the mid-1700s, but by the late 1700s the US abolitionist movement had begun. As a result, by the early 1800s many states had reduced the number of their capital crimes and built more state prisons. In Britain, around a hundred of the crimes that had been punishable by execution were eliminated (Death Penalty Information Center, 2008). By the early 1900s people were being executed by electrocution, although nine US states had abolished the law. By 1920 the US abolition movement had begun to lose support and became unpopular, and soon afterwards newer methods of execution were being devised.

Tuesday, October 29, 2019

The Role of the Engineer in Nation Building Essay

The Role of the Engineer in Nation Building Essay

Why should a privileged person help an underprivileged one? By definition, a privileged person is someone who enjoys special rights, advantages or immunities, or who has the rare opportunity to do something that brings particular pleasure. An underprivileged person, on the other hand, is someone who does not enjoy the same standard of living or the same rights as the majority of people in society. From a socio-economic point of view the presence of both classes cannot be ignored, but there must be a proper ratio between them. A society cannot improve with only one of the two; the wheel of society cannot turn freely without both, though there should of course be a proper balance between them. The law of nature says that a stream flows from top to bottom; likewise, the privileged person should take the hand of the underprivileged person so that society moves at a proper pace. It is time to think of human values and morality: if people are gifted with advantages or rights, it is their duty to help those who are deprived of them. The underprivileged person, in turn, should be thankful and feel gratitude towards the person to whom they are indebted in any sense, be it money, values or spirituality. Even Mahatma Gandhi expressed this in another way: "I want to write many new things, but they must all be written on the Indian slate. I would gladly borrow from the West when I can return the amount with decent interest." Borrowing from others is not a crime, but one should not forget to return something greater to the person to whom one is indebted. This is the cycle of civilisation, and no one can break the chain.
Society is a mixture of people and cultures, and everyone should be aware that everybody is equally important and should help one another to create a warm and healthy atmosphere for the generations to come. The Nobel laureate Amartya Sen, in his book The Idea of Justice (2009), explained that an ideal democracy demands taking from the rich and using it honestly and wisely for the people. Moreover, Sen notes that in famines only a very small proportion of the population is affected, much less than 10%. Political pressure from this group alone would not be enough to force a democratic government to respond; it is the pressure from the non-suffering members of society that makes the difference. But if government officials in democracies do not care about the starving unless they are threatened with a loss of power, why do members of the population who are not starving care about the starving? It seems that if compassion or solidarity moves non-starving citizens to advocate for famine victims, it should also move government officials to respond to the famine. Bentham and Mill explained that Western democracy instils the idea of the greatest good for the greatest number; M. K. Gandhi rejected that principle and said it should be the greatest good of all. In a nutshell, we can conclude that to maintain a true democracy it is the need of the hour to help underprivileged people, for the greatest good of civilisation.

Sunday, October 27, 2019

Superconducting Transition Temperature Determination

Superconducting Transition Temperature Determination: Fabrication of YBa2Cu3O7-δ and Determination of its Superconducting Transition Temperature

A superconducting material is one which, below a certain critical temperature, exhibits, amongst other remarkable traits, a total lack of resistivity, perfect diamagnetism and a change in the character of its specific heat capacity. The BCS theory describes perfectly the phenomenon of superconductivity in low temperature superconductors, but cannot explain the interaction mechanism in high temperature superconductors. In order to determine the superconducting transition temperature of two laboratory-fabricated batches of YBCO, their resistivity and specific heat capacity were measured as functions of temperature. From resistivity measurements the two batches were found to have transition temperatures of 86.8(±0.8) K and 87.8(±0.4) K respectively, which were used to infer oxygen contents of 6.82(±0.01) and 6.83(±0.01) atoms per molecule respectively. These agreed with XRD data and with the literature upper value of the transition temperature, 95 K (at an oxygen content of 6.95). Specific heat capacity measurements of the first batch gave questionable confirmation of these results, but could not be performed on the second batch due to time constraints.

19 January 2010, Josephine Butler College

I. Introduction and Theory

A superconducting material is defined as one in which a finite fraction of the electrons are condensed into a superfluid, which extends over the entire volume of the system and is capable of motion as a whole. At zero temperature the condensation is complete and all of the electrons participate in the forming of the superfluid.
As the temperature of the material approaches the superconducting transition temperature (or critical temperature, TC), the fraction of electrons within the superfluid tends to zero and the system undergoes a second-order phase transition from a superconducting to a normal state.[i]

The phenomenon of superconductivity was first observed by Kamerlingh Onnes in Leiden in 1911, during an electrical analysis of mercury at low temperatures. He found that at a temperature of around 4 K the resistance of mercury fell abruptly to a value which could not be distinguished from zero.[iii] The next great leap in experimental superconductivity came in 1986, when Müller and Bednorz fabricated the first cuprate superconductor.[v]

After its lack of resistivity, one of the most striking features of a superconductor is that it exhibits perfect diamagnetism. First seen in 1933 by Meissner and Ochsenfeld, diamagnetism in superconductors manifests itself in two ways. The first manifestation occurs when a superconducting material in the normal state is cooled past the critical temperature and then placed in a magnetic field, which is then excluded from the superconductor. The second appears when a superconductor (in its normal state) is placed in a magnetic field and the flux is allowed to penetrate. If it is then cooled past the critical temperature it will expel the magnetic flux, in a phenomenon known as the Meissner effect.[vi] This can be seen qualitatively in figure 1.

In 1957, Bardeen, Cooper and Schrieffer managed to construct a wave function in which electrons are paired. Known as the BCS theory of superconductivity, it is used as a complete microscopic theory for superconductivity in metals. One of the key features of the BCS theory is the prediction of an energy gap, the consequences of which are the thermal and most of the electromagnetic properties of superconducting materials.
The key conceptual element of this theory is the formation of Cooper pairs close to the Fermi level. Although direct electrostatic interactions between electrons are repulsive, it is possible for the distortion of the positively charged ionic lattice by one electron to attract other electrons. Thus, screening by ionic motion can yield a net attractive interaction between electrons (as long as their energies are separated by less than the energy of a typical phonon), causing them to pair up, albeit over long distances. Given that these electrons can experience a net attraction, it is not unreasonable that they might form bound pairs, effectively forming composite bosons with integer spin of either 0 or 1. This is made even more likely by the influence of the remaining electrons on the interacting pair. The BCS theory takes this idea one step further and constructs a ground state in which all of the electrons form bound pairs. This electron-phonon interaction leads to one of the three experimental proofs of the BCS theory.

A piece of theory known as the isotope effect provided a crucial key to the development of the BCS theory. It was found that for a given element the superconducting transition temperature, TC, was inversely proportional to the square root of the isotope mass, M:

TC ∝ M^(-1/2)   (1)[vii]

The same relationship holds for the characteristic vibrational frequencies of atoms in a crystal lattice, and it therefore shows that the phenomenon of superconductivity in metals is related to the vibrations of the lattice through which the electrons move. However, this only holds true for low temperature superconductors (a fact which will be discussed in more detail later in this section). The two further experimental proofs of BCS theory both come from the energy gap in the superconducting material.
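The isotope-effect scaling of equation 1 can be sketched numerically. The masses and reference temperature below are purely illustrative values, not measurements from this report:

```python
def tc_for_isotope(tc_ref, m_ref, m_new, alpha=0.5):
    """Scale a transition temperature between isotopes using
    T_C proportional to M^(-alpha); alpha = 1/2 in conventional BCS theory."""
    return tc_ref * (m_ref / m_new) ** alpha

# Illustrative (not measured) values: a 4.15 K superconductor whose
# isotope mass increases from 198 to 204 atomic mass units.
tc_heavy = tc_for_isotope(4.15, 198.0, 204.0)
print(round(tc_heavy, 3))  # a slightly lower T_C for the heavier isotope
```

For a high temperature superconductor, where the exponent tends towards zero, `alpha` would be set near 0 and the predicted shift all but vanishes.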
The first proof is the fact that the gap was predicted and actually exists (figure 2); the second lies in its temperature dependence. From band theory, energy bands are a consequence of a static lattice structure. In a superconducting material, however, the energy gap is much smaller and results from the attractive force between the electrons within the lattice. This gap of Δ occurs either side of the Fermi level, EF, and in conventional superconductors arises only below TC and varies with temperature (as shown in figure 3).

Figure 2: Dependence of the superconducting and normal density of states, DS and Dn respectively. From Superconductivity, Poole, C.P., Academic Press (2005), page 164.

At zero kelvin all of the electrons in the material are accommodated below the energy gap, and a minimum energy of 2Δ must be supplied in order to excite them across the gap. BCS theory predicts equation 2, which has since been experimentally verified:

Δ(T=0) = C kB TC   (2)[viii]

where theoretically the constant C is 1.76, although experimentally in real superconductors it can vary between 1.75 and 2.45.

Figure 3: Temperature dependence of the BCS gap function, Δ. Adapted from The Superconducting State, A.D.C. Grassie, Sussex University Press (1975), page 43.

As stated before, it has been found that the first of these BCS proofs does not hold for high temperature superconductors. In these materials it has been found that the exponent in the relation stated as equation 1 tends towards zero, as opposed to minus one half. This indicates that for high temperature superconductors it is not the electron-phonon interaction that gives rise to the superconducting state. Numerous interactions have been explored in an attempt to determine the interaction responsible for high temperature superconductivity, but so far none has been successful.

Figure 4: A plot of TC against TF derived from penetration depth measurements.
Taken from Magnetic-field penetration depth in K3C60 measured by muon spin relaxation, Uemura Y.J. et al., Nature (1991) 352, page 607.

In figure 4 it can be seen that the superconducting elements constrained by BCS theory lie far from the vast majority of new high temperature superconducting materials, which appear to lie on a line parallel to TF, the Fermi temperature, and TB, the Bose-Einstein condensation temperature, indicating a different interaction method.

One of the most extensively studied properties of a superconductor is its specific heat capacity and how its behaviour changes with temperature (seen in figure 5). It is known that above the transition temperature the normal-state specific heat of a material, Cn, can be given by equation 3, which consists of a linear term from the conduction electrons and a cubic phonon term (the additional Schottky contribution has been ignored in this case, and γ and A are constants):

Cn = γT + AT³   (3)[ix]

Due to the aforementioned energy gap, BCS theory also predicts that at the superconducting transition temperature there will be a discontinuity in the specific heat capacity of the material of the order of 1.43, as seen in equation 4 (where CS is the superconducting-state heat capacity) and figure 5:

(CS - γTC) / γTC = 1.43   (4)[x]

However, for high temperature superconductors this ratio is likely to be much smaller, due to a large contribution from the phonon term in the normal-state specific heat capacity.

Figure 5: Heat capacity of Nb in the normal and superconducting states showing the sharp discontinuity at TC. Taken from The Solid State, Third Edition, H.M. Rosenberg, Oxford University Press (1988), page 245.

Now that the concept of the high temperature superconductor has been explained, this report can return to one of the initial concepts: how the behaviour of resistivity changes with temperature.
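The BCS relations above (equations 2 to 4) reduce to one-line calculations. The sketch below evaluates them with illustrative inputs: the roughly 87 K transition found later in this report for the gap (noting that equation 2 strictly applies to conventional superconductors), and hypothetical Nb-like constants for the heat capacity:

```python
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def bcs_gap_mev(tc, c=1.76):
    """Equation 2: gap(0) = C * k_B * T_C, returned in meV."""
    return c * K_B_EV * tc * 1e3

def normal_specific_heat(t, gamma, a):
    """Equation 3: C_n = gamma*T + A*T^3 (electronic plus phonon terms)."""
    return gamma * t + a * t**3

def cs_at_tc(tc, gamma, jump=1.43):
    """C_S at T_C implied by equation 4: (C_S - gamma*T_C)/(gamma*T_C) = jump."""
    return gamma * tc * (1 + jump)

print(round(bcs_gap_mev(87.0), 1))  # ~13.2 meV for an 87 K transition
# Hypothetical Nb-like constants, J/(mol K^2), J/(mol K^4), K:
gamma, a, tc = 7.8e-3, 2.6e-6, 9.3
print(round(normal_specific_heat(tc, gamma, a), 4))
print(round(cs_at_tc(tc, gamma), 4))
```

For a high temperature superconductor the observed jump would be smaller than `jump=1.43`, as the paragraph above notes.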
A low temperature superconductor is likely to obey the T⁵ Bloch law at low temperatures, and so its resistivity will fall to zero in a non-linear region. In contrast, the resistivity of a high temperature superconductor should fall to zero before it leaves the linear region.

The resistivity profile of a high temperature superconductor can also be used to determine its purity. By comparing the range of temperatures over which the transition occurs with the transition temperature itself, an indicator of purity can be determined (equation 5, where PI is the purity indicator and ΔT the magnitude of the region over which the transition occurs). In this case a value of zero would indicate a perfectly pure sample:

ΔT / TC = PI   (5)[xi]

Other than for scientific purposes within the laboratory, the biggest application of superconductors at the moment is producing the large, stable magnetic fields required for magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR). Due to the costliness of high temperature superconductors, the magnets used in these applications are usually low temperature superconductors. It is for this same reason that the commercial applications of high temperature superconductors are still extremely limited (that, and the fact that all high temperature superconducting materials discovered so far are brittle ceramics which cannot easily be shaped into anything useful, e.g. wires).

Yttrium barium copper oxide (YBCO) is just one of the aforementioned high temperature cuprate superconductors. Its crystal structure consists of two CuO2 planes, held apart by a single atom of yttrium, on either side of which sits a BaO plane followed by Cu-O chains. This can be seen in greater detail in figure 6.

Figure 6: The orthorhombic structure of YBCO required for superconductivity. Adapted from High-Temperature Superconductivity in Cuprates, A.
Mourachkine, Kluwer Academic Publishers (2002), page 40.

If the structure only has 6 atoms of oxygen per unit cell then the Cu-O chains do not exist and the compound behaves as an antiferromagnetic insulator. In order to create the Cu-O chains, and for the compound to become a superconductor at low temperatures, it has to be doped gradually with oxygen. The superconducting state has been found to exist in compounds with an oxygen content anywhere from 6.4 to 7, with optimal doping found to occur at an oxygen content of about 6.95.[xii]

This report intends to determine the superconducting transition temperature of a laboratory-fabricated sample of YBCO. This will be achieved by measuring how both its resistivity and specific heat capacity vary as a function of temperature.

II.I Fabrication and Calibration Methods

To ensure an even firing of the sample within the furnace, and to find out where in the furnace the heating profile was closest to that of the actual heating program, three temperature profiles of the furnace were taken while heating. The length of the furnace was measured with a metre ruler and found to be 35±1 cm. Four K-type thermocouples were then evenly spaced (every 11.5±0.5 cm) along its length, as can be seen in figure 7 below.

Figure 7: Transverse section of the furnace. Thermocouples are numbered 1 to 4, and the length of the furnace surrounded by heating coils is shown in green, blocked at either end by a radiation shield.

Temperature profiles were taken for each of the temperature programs displayed in figure 8; all started at room temperature and were left to run until the temperature displayed by the thermocouples had stopped increasing.

Figure 8: Details of furnace programs used to obtain the temperature profiles shown in section III.

While this was being done, samples of YBCO were fabricated.
The chemical equation for the fabrication of YBCO is given in equation 6, and the amounts of the reactants required to fabricate 0.025 mol are displayed in figure 9:

Y2O3 + 4BaCO3 + 6CuO → 2YBa2Cu3O7-δ + 4CO2   (6)

Figure 9: Quantities of reactants required to fabricate 0.025 mol YBCO. Relative molecular masses (RMMs) calculated using relative atomic masses.

The procedure for fabrication can be seen in figure 10. Using this technique, two batches of YBCO were fabricated; the first yielded just one pellet and the second yielded four.

Figure 10: Describes the steps taken during fabrication of superconducting YBCO samples.

In order to obtain a more accurate value of the temperature within the sample space of the cryostat, the resistance of a platinum thermometer was measured as a function of temperature. To do this, a Pt100 platinum thermometer was varnished to one side of a cryostat probe and connected via a four point probe to a power source (as can be seen in figure 11), an ammeter and a voltmeter (Keithley 2000 DMMs). The ammeter and the voltmeter were connected to a computer so that live data could be fed straight into a LabView program (appendix 2), which would record the data with both much greater accuracy and precision than could be achieved by a human. Although a stable and constant current was used, it was felt necessary, in the interest of good practice, to add the live ammeter feed to the LabView program, as tiny fluctuations in current could have changed results in ways which would not otherwise have been noticed. The probe was then placed in the sample space, which was subsequently evacuated (to a pressure of 8×10⁻⁴ Torr) and flushed with helium twice. The sample space was then left full of helium, due to its high thermal conductivity.
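Returning to equation 6, the figure-9 reactant quantities can be recomputed from the stoichiometry. The sketch below uses standard atomic masses; since the report's own RMM table is not reproduced here, these are independent estimates rather than its exact values:

```python
# Standard atomic masses in g/mol (assumed, not taken from figure 9).
ATOMIC_MASS = {"Y": 88.906, "Ba": 137.327, "Cu": 63.546, "C": 12.011, "O": 15.999}

def molar_mass(formula):
    """Molar mass from an {element: count} dict, in g/mol."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

mol_ybco = 0.025
# Equation 6 yields 2 mol YBCO per mol Y2O3, per 4 mol BaCO3 and per 6 mol CuO.
reactants = {
    "Y2O3": ({"Y": 2, "O": 3}, mol_ybco / 2),
    "BaCO3": ({"Ba": 1, "C": 1, "O": 3}, mol_ybco * 2),
    "CuO": ({"Cu": 1, "O": 1}, mol_ybco * 3),
}
for name, (formula, moles) in reactants.items():
    print(f"{name}: {moles * molar_mass(formula):.2f} g")
```

This gives roughly 2.82 g of Y2O3, 9.87 g of BaCO3 and 5.97 g of CuO for a 0.025 mol batch.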
The cryostat was cooled with liquid nitrogen to a temperature of approximately 77 K, and the LabView program was left to record the change in the resistance of the platinum thermometer (using Ohm's law, V = IR) and its corresponding temperature (from the intelligent temperature controller, or ITC) while the cryostat warmed up naturally. The temperature-increase function of the program was not used: leaving the cryostat to warm up as slowly as possible allowed data to be gathered over a much greater period of time, which led to a relationship with less error. This relationship was plotted so that the temperature-dependent resistance profile of the platinum thermometer could be incorporated into the LabView program, for use in future experiments to determine more accurately the temperature of the sample space.

While this was being done, the dimensions of the cut samples were measured using vernier callipers and the samples were weighed in order to determine a density for YBCO. Each dimension was measured six times (to reduce random error) by two different people (to reduce systematic error). The offcuts of each batch of YBCO were then sent off for X-ray diffraction analysis in order to determine the chemical composition of the fabricated samples. The diffraction was carried out using a wavelength of 1.54184 Å.

II.II Fabrication and Calibration Results, Analysis and Interpretation

The three temperature profiles of the furnace can be seen below in figure 12. The results are slightly skewed because one end of the furnace was left open to allow the thermocouples to sit inside it; this can be seen back in figure 7. The measurements were taken by eye over a 10 second time period. It was therefore decided that the errors on the time should be ±5 seconds and the error on the temperature ±1 K, both of which are unfortunately too small to be seen on the profiles. The data points were fitted to cubic curves, as this best matched the physical behaviour of the heating.
Figure 12: Temperature profiles of the furnace. The temperature of the program is shown in black crosses and the temperatures of thermocouples 1, 2, 3 and 4 are shown in yellow, red, green and blue respectively.

It can immediately be seen from figure 12 that during the initial stages of heating the temperatures of all of the thermocouples lag behind that of the furnace program, particularly those of the thermocouples at the open end of the furnace (1 and 2). This can be accounted for by poor thermal insulation at the open end of the furnace. It can also be seen that as the furnace reaches its required temperature and begins its dwell time, the temperatures of the thermocouples continue to rise for a short duration before also levelling out. The most likely reason for this is that once the furnace reaches its required temperature the program instantaneously cuts the current to the heating coils; the coils, however, still hold thermal energy, which leaches through the ceramic inner of the furnace into the firing space itself. Another striking feature of the profiles is that the longer the furnace takes to reach the required temperature, the more linear the increase in temperature is throughout the furnace. It was therefore deduced that had the furnace been sealed at both ends with radiation rods and covers, the centre of the furnace would be the region with a temperature profile closest to that of the furnace program. It was also decided that, in order to ensure a steady, linear rate of heating, a slower increase in temperature would be used.

The masses of the batches before and after calcination were compared and were found to have decreased by an average of 2.44(±0.01)% of their initial masses. This was expected, as one of the by-products created during the calcination of BaCO3 is CO2, which would have been removed from the furnace during this heating period, reducing the mass of the compound.
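The mass-change percentages here, and the sample densities reported below, are simple ratio calculations with propagated errors. A sketch with placeholder numbers (chosen only to reproduce the scale of the reported values, not taken from the raw data):

```python
import math

def percent_change(m_before, m_after):
    """Signed percentage change in mass."""
    return 100.0 * (m_after - m_before) / m_before

def density_with_error(mass, d_mass, volume, d_volume):
    """rho = m/V, with fractional errors added in quadrature."""
    rho = mass / volume
    d_rho = rho * math.sqrt((d_mass / mass) ** 2 + (d_volume / volume) ** 2)
    return rho, d_rho

# Placeholder measurements: 10.00 g calcining down to 9.756 g,
# and a single 2.10(1) g pellet of volume 0.400(5) cm^3.
print(round(percent_change(10.00, 9.756), 2))  # a ~2.44% mass loss
rho, d_rho = density_with_error(2.10, 0.01, 0.40, 0.005)
print(f"{rho:.2f} +/- {d_rho:.2f} g/cm^3")
```

For a batch with several pellets, the standard error on the mean of the individual densities would be used instead, as the report does for batch two.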
The weights of the samples from batch two before and after annealing were compared, and it was found that each of the samples of YBCO had increased in mass by an average of 3.51(±0.03)% of their initial masses. This was unexpected, as during the annealing process the compound is reduced and so should lose mass. One possible explanation could be a simultaneous reduction and oxygen doping of the compound, tending to fill the copper and oxygen chains shown in figure 6.

The densities of both batches of YBCO were calculated by weighing each of the samples from that batch and dividing their masses by their measured volumes. The densities of batches one and two were found to be 5.25(±0.04) g cm⁻³ and 3.5(±0.1) g cm⁻³ respectively. The greater error stated for the density of the second batch is a result of an error on the mean being taken, whereas the error on the density of the first batch is merely propagated from those of its volume and mass, as there was only one sample. When literature values of the density of YBCO were consulted, it was found that the compound has a variable density of anywhere from 4.4 to 5.3 g cm⁻³.[xiii] Comparing this range to the experimentally determined values, the density of the first batch lay just inside the range, whilst the density of the second batch lay well outside its lower end. One possible reason for the very low density of batch two could be that its samples were left in the press for less time than batch one during sintering.

All samples were checked to see whether they exhibited the Meissner effect. All did, and a photograph showing this can be seen below in figure 13.

The X-ray analysis of the two laboratory-fabricated batches of YBCO can be seen in figure 14 below. The intensities were recorded every 0.01 degrees and then scaled appropriately using the greatest intensity, so that they could be compared to each other.
As can be seen in figure 14, when both data sets are overlaid, negligible differences can be seen. This indicates that both batches have almost identical chemical compositions and structure. A reasonable amount of background noise can be seen, accompanied by an offset from zero intensity which changes in magnitude as the angle of diffraction increases. This can be accounted for by two factors. The first is tiny random impurities in the batches, obtained by fabricating outside of a totally clean environment. The second is that small amounts of the initial reactants may not have formed the required compound during calcination and annealing.

A standard diffraction pattern of YBCO produced using the same wavelength of radiation was taken from The Chemical Database Service and can be seen below in figure 15. When this is compared to the patterns of the two laboratory-fabricated samples in figure 14, all of the same intensity peaks can clearly be identified. This would indicate that YBCO had been successfully fabricated.

Figure 15: X-ray diffraction pattern of YBCO6. Calculation of the structural parameters of YBa2Cu3O7-δ and YBa2Cu4O8 under pressure, Ludwig H. A. et al., Physica C (1992) 197, 113-122.

It was expected that comparing standard diffraction patterns of YBCO of different oxygen contents with those fabricated in the laboratory would allow the samples' oxygen content to be deduced. This could not be achieved, however, as all of the standard patterns of YBCO found in journals and online databases, for oxygen contents from 6 to 7, had extremely similar diffraction patterns.

The resistance of the platinum thermometer was plotted against temperature and can be seen in figure 16. A linear relationship was fitted to the data, as seen in figure 16, which produced a reduced chi-squared value of 1.317 and equation 7.
T = 2.4958(±0.0007)R + 25.54(±0.04)   (7)

The reduced chi-squared value indicates a strong linear relationship, while the equation of the line gives a resistance of 99.2(±0.2) Ω at a temperature of 273.2(±0.1) K. When compared to the technical data for this component, which gives a resistance of 100.00 Ω[xiv] at a temperature of 273.15 K, it shows very close correspondence, although not within error. A temperature of one fewer significant figure's accuracy had to be used in this calculation, due to the inability of the ITC to measure temperature to more than one decimal place. The slight difference between the reference and experimental values of the resistance of the Pt100 at a given temperature can be accounted for by the position of the ITC's heat sensor. This lies just outside the sample space and would detect a small increase in temperature before it reached the Pt100 within the sample space, causing the Pt100 to lag slightly behind in temperature. This would produce the slightly lower resistance for the given temperature calculated above, and can be seen as a very slight systematic error.

III.I Resistivity Methods

One of the cut samples was fixed to the opposite side of the probe to the Pt100 with thermally insulating varnish, and four copper wire contacts were painted onto it with electrically conductive silver paint. The separation of each of the four wires was measured with vernier callipers, six times each by two different people for the same reasons as before, and recorded for later calculation. A four point probe resistance measurement was used in order to avoid indirectly measuring resistances other than the sample resistance alone; the contact resistance and spreading resistance are also normally measured by a simple two point resistance measurement.
The four point probe uses two separate contacts to carry current and two to measure the voltage (in order to set up a uniform current density across the sample), and can be seen in figure 17. In a four point probe the current-carrying probes are still subject to the extra resistances, but this is not true for the voltage probes, which should draw little to no current due to the high impedance of the voltmeter. The potential, V, at a distance r from an electrode carrying a current I in a material of resistivity ρ can be expressed as

V = ρI/(2πr) = (ρI/2π)[1/S1 + 1/S3 - 1/(S1+S2) - 1/(S2+S3)]   (8)[xv]

where r has been expressed in terms of the contact separations (figure 17). This can be rearranged to calculate the resistivity of the material being measured.

The probe was once again inserted into the cryostat, and the cryostat was cooled as detailed in section II.I. Once the sample had reached a temperature equal to the boiling point of liquid nitrogen, a LabView program was left to run which recorded the resistance of the sample and its corresponding temperature. The program used to do this can be seen in appendix 2. Although a temperature-increase function was built into the program, the cryostat was left to warm up naturally, for the same reason as when calibrating the platinum thermometer. The set-up for this can be seen below in figure 18.

Figure 18: Schematic for the resistivity experiment. Vacuum pumps and pressure gauges have been omitted, as well as the heater on the ITC, as none of them bear any real relevance to the experiment. Data cables are shown in red, the Pt100 in blue and the sample in grey.

This was repeated for each sample of fabricated YBCO at least twice, and their temperature-dependent resistivity profiles can be seen in section III.II.

III.II Resistivity Results, Analysis and Interpretations

The resistance profile of the sample from the first batch was measured twice, and these profiles can be seen in figure 19.
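Rearranging equation 8 for ρ gives a one-line conversion from a four point probe reading to a resistivity. The voltage, current and contact separations below are hypothetical values, not measurements from this experiment:

```python
import math

def four_point_resistivity(v, i, s1, s2, s3):
    """rho from equation 8:
    V = (rho*I/2pi) * [1/S1 + 1/S3 - 1/(S1+S2) - 1/(S2+S3)]."""
    geom = 1 / s1 + 1 / s3 - 1 / (s1 + s2) - 1 / (s2 + s3)
    return 2 * math.pi * v / (i * geom)

# Hypothetical reading: 1.2 mV at 10 mA with equal 2 mm contact separations.
rho = four_point_resistivity(v=1.2e-3, i=10e-3, s1=2e-3, s2=2e-3, s3=2e-3)
print(f"{rho:.3g} ohm m")
```

With equal separations S, the bracketed geometry factor reduces to 1/S, recovering the familiar collinear-probe result.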
Unfortunately, it was not possible on this occasion to measure the four-point probe contact separations on this first sample before it was removed, and so these profiles could not be adjusted to those of resistivity using equation 8. However, as this transformation is simply a stretch in the y-axis, it does not change the behaviour of the transition or the value of the transition temperature obtained from the profile. It can be seen in figure 19 that although the first profile cuts out at approximately 190 K, both profiles follow virtually the same path until that point. The first profile cut out early because data points were being taken once every second, causing the program to fail and shut down. The number of data points was then cut to one every three seconds for subsequent experiments. With measurements being taken automatically by computer (and with the Keithley multimeter's ability to measure currents and voltages to 7 significant figures), the errors on the resistance were negligible (±0.003% of the value of the resistance) and so cannot be seen in figure 19. The same is true of the errors on the temperature: assuming that equation 7 is correct, then with a ±0.003% error on any calculated resistance, the temperature of the sample space should only have an error of ±0.04 K. Had each of the samples been perfectly pure, their profiles would have a very sharp transition between the states and the transition temperature would be very clear. However, as a result of the broadening of this transition due to the impurity of the samples, a transition temperature could not be clearly defined. Had powerful enough graphing software been to hand, and were the profile able to be fitted to any known curve on this software, the most reliable way to find the transition temperature would have been to plot the first derivative of resistivity with respect to temperature and then determine its maximum (corresponding to the point of inflection within the transition).
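The derivative approach described above is simple to express numerically even without dedicated graphing software; the sketch below uses NumPy and a synthetic tanh-shaped transition purely for illustration (the names and the 90 K centre are assumptions, not the experimental data):

```python
import numpy as np

def transition_temperature(temps, resistances):
    """Estimate Tc as the temperature where dR/dT is largest,
    i.e. the inflection point of the superconducting transition."""
    dr_dt = np.gradient(resistances, temps)
    return temps[np.argmax(dr_dt)]

# Synthetic broadened transition centred on 90 K (illustrative only).
t = np.linspace(77.0, 120.0, 861)
r = 0.5 * (1.0 + np.tanh((t - 90.0) / 2.0))  # ~0 below Tc, ~1 ohm above
tc = transition_temperature(t, r)
```

On real, noisy data the profile would first need smoothing or curve fitting, which is exactly the software limitation the text notes.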
This not being the case, the temperature of the transition was approximated as the temperature at the halfway point in the drop between the two states. To ascertain at which points on the profile the change in state began and ended, separate lines of linear regression were fitted to the linear data in both the normal state and the superconducting state. These two lines of regression were extended closer and closer to the transition from either side until the adjusted R2 value of the lines of best fit was 0.999, which indicated an excellent linear fit. It was found upon inspection that the mid-point of the transition could be defined in two different ways: the mid-point in resistivity and the mid-point in temperature (the mid-point in resistivity corresponding, of course, to a slightly different temperature than that found at the mid-point of the temperature). This was due to a slight skew in the transition in the profile, and so in order to clearly define the superconducting transition temperature a clearer approximation than the one stated before had to be made. It was therefore approximated that the temperature corresponding to the mid-point in resistivity should be averaged with the mid-point in temperature on the x-axis, with the error being the distance either side of this average value at which either previous mid-value lay. This can be seen more clearly in figure 20. Figure 20: Shows the method used to calculate the superconducting transition temperature, using an expanded view of the first profile in figure 19. Lines of linear regression are shown in black either side of the area in which the transition occurs (in yellow). Both temperatures can be seen highlighted by dashed lines. By the use of this method it was determined that the transition temperatures for the profiles in figure 19 were 87.6(±0.9) K and 86.0(±0.4) K for the first and second profiles respectively.
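The mid-point-averaging procedure just described can be sketched as follows; this is my reading of the method (the transition window bounds would come from the two regression fits, and all names and the synthetic data are assumptions):

```python
import numpy as np

def midpoint_tc(temps, rho, t_low, t_high):
    """Average the temperature at the mid-resistivity of the transition
    window [t_low, t_high] with the window's mid-temperature; the quoted
    error spans from this average to either individual estimate."""
    mask = (temps >= t_low) & (temps <= t_high)
    t_win, r_win = temps[mask], rho[mask]
    rho_mid = 0.5 * (r_win.min() + r_win.max())
    t_at_rho_mid = t_win[np.argmin(np.abs(r_win - rho_mid))]
    t_mid = 0.5 * (t_low + t_high)
    tc = 0.5 * (t_at_rho_mid + t_mid)
    err = 0.5 * abs(t_at_rho_mid - t_mid)
    return tc, err

# A deliberately off-centre window so the two mid-points disagree slightly.
t = np.linspace(80.0, 100.0, 2001)
r = 0.5 * (1.0 + np.tanh((t - 90.0) / 1.5))
tc, err = midpoint_tc(t, r, 86.0, 96.0)
```

The returned error reproduces the convention in the text: half the separation between the two competing mid-point estimates.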
Although these do not agree with each other (within the confines set by the errors), an average was taken and found to be 86.8(±0.8) K. The purity indicator was also calculated for each profile.

Friday, October 25, 2019

East of Eden Essay: Steinbeck vs. Christ

East of Eden: Steinbeck vs. Christ In the novel East of Eden, John Steinbeck proposes the idea that man has much more control over his own destiny than many choose to believe, a conclusion reached from Steinbeck's own interpretation of the story of Cain and Abel, wherein God neither instructs Cain to master the sin which is crouching at his door, nor predicts that Cain will master it, but rather gives Cain the ability to choose. Taking the text out of context, Steinbeck uses it to convey the message that a man's destiny is up to himself and that the ability to choose between right and wrong is as much a curse as it is a blessing. Steinbeck's interpretation is incorrect. By taking the clause "thou mayest" out of its context, Steinbeck twists the truth of free will and uses it to convey his own message: that a man, through his own free will, can shape and define his destiny. By reading the text in context, both the story of Cain and Abel and the story of Christ, which is the accepted Christian message of the Bible as a whole, the message that "thou mayest" conveys is quite different in both meaning and gravity. The very context of the phrase tells its immediate meaning: "If you do what is right, will you not be accepted? But if you do not do what is right, sin is crouching at your door; it desires to have you, but [thou mayest] master it." In context, the phrase "thou mayest" is more than the blank check that Steinbeck makes it out to be; rather, it is a warning and an instruction. God gives Cain the warning that if he chooses not to do rightly, sin will conquer him; and at the same time, He offers hope and tells Cain he can and, in context, should choose to master that sin. The Biblical context of the story goes further, applying itself to life in general. As the whole of the Bible unfolds, the concept of free will is realized on a far greater magnitude than Steinbeck applies it. All humanity is subject to the harassment of a sinful nature and a fallen world.
"There is no one righteous, not even one; there is no one who understands, no one who seeks God." Therefore, instead of the uninfluenced freedom to choose his

Thursday, October 24, 2019

Face-to-Face Communication Is a Better Way of Communication Essay

Imagine that your sweetheart keeps talking to you through the telephone, the Internet, or letters and refuses to meet you face to face even for a meal: what would you do? If I were you, I would be going crazy! But things like this often happen in today's society. With the development of the communication industry, people are getting used to various so-called fast ways of communication. Personally, however, no matter how fast and convenient those other communication modes can be, I think we should never abandon the most original way of communication, face-to-face communication, which is more vivid, more interactive, and makes it easier for us to build relationships with others. Face-to-face communication makes it more fun and vivid to talk to others, because it contains much more nonverbal language than other ways of communication. When you talk to a person face to face, you make eye contact with each other, through which both of you can exchange your inner emotions. Furthermore, by observing the person's gestures, you can also infer his personality and decide what kind of person he is. And, perhaps least importantly, as the old saying goes, "all men search for beauty": you can see the appearance of the person you are talking to, which might even spark affection if both of you are pleased with what you see. All of this makes it possible for us to feel that the person we are talking to is a real and touchable individual. There are times when you have to deliver exact information to other people, and at such times face-to-face communication will be your first choice, because it creates an interactive and efficient conversation. Firstly, when you are talking to a person face to face, both of you can raise questions about anything you cannot understand, so that the other person can explain it clearly and in time, which contributes a great deal to eliminating the misunderstandings and barriers in your communication.
Secondly, a person's tone and voice can suggest his present mood, which makes it easier for you to perceive his subtle changes of emotion. Finally, in face-to-face communication, you can tell whether the words the person speaks are authentic by observing his facial expression, which can also help you judge whether the person you are talking to is a faithful one. All of this can make your conversation more successful and efficient, especially when you are negotiating with someone. Maybe the biggest advantage of face-to-face communication is that it can deepen your relationships with others, because it closes the distance between people. When communicating face to face, you can see the smile on the person's face, which makes you feel warmth and kindness; you can hug each other when you are excited; and even a handshake can make you feel the other person's respect. All of this brings you closer to each other, in a way that can hardly be achieved by communicating through telephone or e-mail. For instance, as college students we are far away from home; even if we call our parents almost every day, we still feel homesick and lonely. Why? Because the telephone can never make us feel as close as meeting each other face to face. The same is true among friends: if we do not meet each other face to face as often as possible, we will soon feel our relationships growing cold. In conclusion, with all factors taken into consideration, I firmly believe that face-to-face communication is better than any other type of communication. Now, try to communicate with people face to face and you will find it more colorful and efficient than calling others on the telephone or greeting each other just by sending an e-mail!

Tuesday, October 22, 2019

Education for Learners with Diverse Needs

This paper is designed to develop an understanding of learning disabilities, communication disorders, and dual diagnosis, as well as giftedness. In addition, establishing a positive learning environment for children with impairments will maximise their achievement. To understand each type of disability mentioned above, we should look at the characteristics, causes, and definitions of each form of disability and disorder to better enhance the learning environment for both the student and the teacher. As a special educator, it is imperative to stay abreast of all disorders we come in contact with in order to provide a quality education for all those involved. For many students with disabilities, and for those without, the key to success in the classroom lies in having adaptations, accommodations, and modifications made to the curriculum, instruction, and other classroom activities. Learning Disabilities There are many definitions of learning disabilities. However, the most widely used comes from the Individuals with Disabilities Education Act (IDEA). It defines learning disabilities as various cognitive or psychological disorders that impede the ability to learn, especially those that interfere with the ability to learn math or develop language skills (listening, reading, writing, and speaking) (IDEA 2004). Some characteristics of learning disabilities include deficits in the areas of reading and written language that prevent children from making connections with similar concepts in learning math (e.g., being unable to connect 3 + 5 = 8 when asked what 5 + 8 equals), difficulty thinking in sequential or logical order, and behaviours such as being disorganized and losing things. No one is precisely certain what causes learning disabilities; experts are not sure of the causes.
The differences in how a person's brain works and how it processes information can stem from brain damage, heredity, problems during pregnancy, and the environment the person lives in. Currently, a figure of 45.3% of school-aged children in the United States are classified as having a specific learning disability and receive some form of special education support (United States Office of Special Education, 2007a). Communication Disorders Communication disorders are speech and language disorders relating to areas such as oral and motor function. They can be verbal, nonverbal, or a combination of both. Communication involves three components: sender, message, and receiver. Language (the system of symbols used to express and receive meaning) is a factor in each element of the process; speech (the systematic production of sound) is a factor in verbal communication. Communication disorders include speech disorders of articulation, fluency, and voice, as well as language disorders. They may range from simple sound repetitions, such as stuttering, to occasional misarticulation of words and a complete inability to use speech and language for communication. A child who is language impaired will show skills in the primary language that are below those expected for his or her chronological age. The prevalence of language deficits in the school-age population in the United States is about 2.5%, and 50% of children who receive special education services have other disabilities (Hall et al., 2001). An understanding of normal patterns of language acquisition is an important part of identifying children with language disorders and developing remediation plans for them. It also involves screening, assessing, diagnosing, and making appropriate placement decisions.
Giftedness Gifted children may show outstanding abilities in a variety of areas including intellect, academic aptitude, creative thinking, leadership, and the visual and performing arts. They also show the ability to find and solve problems quickly. The full development of the gifted student depends on his or her environmental context, strong encouragement, and support from family and social groups (Sydney Marland 1972). Longitudinal studies of gifted children indicate that most of them are healthy and well adjusted and achieve well into adulthood, with some exceptions who are underachievers. Teaching cognitive strategies, problem finding, problem solving, and creativity are some features that special programs focus on for gifted students. Effective problem-finding and problem-solving skills depend on the individual's flexible use of his or her knowledge, structure, and creativity. In addition, they depend on the capacity for divergent thinking, a willingness to be different, and strong motivation. Underachievers have feelings of inferiority, an expectation of failure, and low self-confidence. The prevalence of giftedness is about 10% to 55% of the school-age population of children who are identified (Gagne, 2003; Renzulli & Reis, 2003). To uncover the abilities of children who come from cultural subgroups, special identification methods and procedures that depend less on prior knowledge and experience and more on reasoning and creative thinking are necessary. Children with physical and sensory disabilities can be intellectually gifted, but often their abilities go undiscovered because educators do not look for their special talents.
Dual Diagnosis Fredericks and Baldwin (1987) suggested that the term dual diagnosis be used with great care; a mental health disorder is one disability, with secondary characteristics growing out of the lack of environmental input that stems from the sensory disability. Unfortunately, some children with certain impairments struggle in class and have behaviour problems. Often these conditions may result from having to struggle in class, or from emotional health issues perhaps caused by attention deficit disorder (ADD) or attention deficit/hyperactivity disorder (ADHD). However, IDEA has a problem with the number of children who qualify as disabled. Furthermore, Pinborough-Zimmerman, Satterfield, Miller, Bilder, Hossain and MaMohn (2007) found that 6.3% of school-aged children were receiving speech therapy services alongside co-occurring conditions such as intellectual disabilities, autism spectrum disorder, and emotional behaviour disorders. In the public school system these numbers have grave implications for providing essential services for these children. Curriculum There should be a differentiated curriculum to serve all learners, regardless of ability, disability, age, gender, or cultural and linguistic background. Curriculum should be modified appropriately. First, there should be modification for learning disabilities in the areas of math, reading, and language. For communication disorders, the teacher should make sure he or she speaks with students with impairments the same way he or she speaks to general education students. The curricula for gifted students involve lessons, assignments, and schedule modifications geared toward higher-order thinking, content modification, and promoting group interaction. Some theorists also suggest that curriculum needs to be framed in terms of the learning environment.
The key feature of educating a child with any disability or disorder is to focus on tailoring the curriculum to the strengths, weaknesses, needs, interests, abilities, and characteristics of the child. It is important to understand the differences in order to identify, assess, evaluate, and remediate for the student. Conclusion Finally, it is important that regular teachers and special educators are armed with the knowledge, training, and information regarding disabilities. Students with communication disorders, giftedness, and any other learning disabilities can learn and be successful academically. Professionals can prepare curricula and appreciate the critical features of services for special-needs students by modifying lessons for them and giving them accommodations alongside other classroom activities.

Factors Affecting the Success of Mega-Events

Factors Affecting the Success of Mega-Events Introduction In the recent past, the popularity of events management and related projects has increased. According to Bladen (2010), the phenomenon involves the application of project management concepts to the administration of events and occasions. In this paper, the author will analyse a number of contemporary issues affecting the management of these undertakings. To this end, the author will review six articles reporting on various issues in this field. A key theme affecting the operations of a manager operating in this field in each of the articles will be identified and critically reviewed. Contemporary Issues in Events Management Media Representation of Volunteers at the Beijing Olympic Games (Charles Richard Bladen) Volunteers play a major role in the management of activities related to many events. Bladen (2010) explores how the media represents volunteers in sports. Bladen analyses this issue from the perspective of the 2008 Beijing Olympics. The major theme in this article is the portrayal of volunteers in mainstream media. According to Bladen (2010), the Chinese and foreign media houses varied in their coverage of assistants involved in the Olympics. The local media portrayed these individuals as the force behind the success of the mega-event. The foreign media, on the other hand, treated volunteers as a 'front' for the Chinese government. According to Bladen (2010), Chinese media tended to glorify these parties, while foreign media focused on their shortcomings. The media is a very influential force, especially in gauging the success of managing an event like the Olympics. In some cases, media outlets can distort the outcomes of an event. Such distortions can occur in situations where there are conflicting representations, as in the Olympic Games.
Bladen (2010) feels that the reporting of volunteering in sporting events lacks sufficient research. The media pays more attention to the individuals engaged in the sports and the actual games, ignoring the parties who provide their services for free. In future, the media could be harnessed to manage mega-events such as the Olympics. This could be achieved by acknowledging the individuals behind the preparation and execution of such activities. The Beijing Olympic Games would have been depicted as a success if the conflicting representation of the volunteers had not given rise to extraneous issues, such as politics. Bladen (2010) addresses the problem of differing representations of volunteers in the games by analysing the major issues revolving around their roles. The motives behind the activities of these individuals are established by focusing on their duties and how they are treated by the media. The misrepresentation of volunteers in the Olympic Games had negative impacts on the Chinese legacy. The misinformation raised questions about China's sincerity and competence in managing such events. The Chinese people were depicted by the media as friendly and accommodating hosts. However, their government was regarded as 'Big, Bad China' (Bladen 2010). The biased reporting in the media cannot be ignored. Such skewed representation extends to the treatment of volunteers by local and international news agencies. As an events manager, the author of this paper feels that the media plays a significant role in the success of events. In addition, the invaluable contribution of volunteers cannot be ignored, irrespective of their skills. However, the coverage of events by the media should be independent of popular themes and attitudes surrounding the culture or politics of the people.
The Conceptualisation and Measurement of the Legacy of Mega Sporting Events (Holger Preuss) The legacy of any event significantly influences the management of similar occasions in the future. In his article, Preuss (2007) reviews the nature of the legacies left behind by large-scale sporting affairs. The impact of such events is the major theme in this article. The definition of the term legacy, especially in relation to events, is not clear-cut. As a result, the International Olympic Committee has made efforts to clarify sporting activities and their impacts. The value derived by communities or sports organisations from games, as well as the value of the sporting facilities, constitutes the legacy of sports events. According to Preuss (2007), the effects of any sports undertaking on the community and on other stakeholders can be either positive or negative. They can also be planned or unplanned. The impacts of the event on sporting structures may persist for a long time. As an events manager, it is important to note that the intended and unintended legacies of a mega sporting undertaking determine the management of the entire undertaking. In addition, the benefits that members of the community derive from the occasion determine its success or failure. Considering the massive investments made in large-scale sporting events, the manager should take the lasting legacy very seriously. The impacts are part of the occasion's return on investment. Gauging the lasting impression of sports is very important in the management of these undertakings in the future. For instance, the manager should determine the extent to which the event benefits members of the society. To this end, those undertakings that have positive impacts on the community should be prioritised.
There are several methods used in measuring the legacy of large-scale sporting affairs. They include the benchmark, top-down, and bottom-up approaches (Preuss 2007). As an event manager, the author of this paper agrees with Preuss that the bottom-up approach is more comprehensive, effective, and adequate compared to the rest. It is important to determine the structural changes brought about by 'super-events' (Preuss 2007). In addition, the manager should gauge the emotional impacts of the occasions, as well as their impacts on the image of the country. For instance, the enhancement of the country's image as a result of hosting the Olympics is a major aspect of the event's legacy. In the opinion of this author, the future of such large-scale sporting organisations as the FIFA World Cup depends on their legacies in relation to the host nation. With regard to the current global economic turmoil, countries are taking the issue of the impacts of events very seriously. As a manager, this author will strive to enhance the effects of large occasions using the pre-event, event, and post-event legacy framework. Hosting Business Meetings and Special Events in Virtual Worlds: A Fad or the Future? (David M. Pearlman and Nicholas A. Gates) The contemporary world is characterised by significant developments in relation to information and technology. Today, technology is emerging as an essential aspect of almost all human undertakings. Events management is one of the areas in the modern world where technology is utilised. The application of technology in managing events is the major theme in this article. Pearlman and Gates (2010) carried out a study to examine virtual reality and its significance to contemporary organisations. The two sought to examine the adoption of this technology in businesses, special parties, and meetings. The viability of virtual reality applications in today's business world was also analysed.
According to Pearlman and Gates (2010), the term 'virtual reality' is used in reference to computer-simulated environments. The technology is used to 'imaginarily' replicate the real world. A number of computer applications are used to generate 3D visual environments that constitute the virtual world. Most professionals lack information on virtual reality applications. However, in spite of these inadequacies, the use of this technology in the business world is on the rise. Some of the applications available in the market include WebEx, 3D SL, and ON24 (Pearlman and Gates 2010). The most significant contribution of virtual reality to the profession is the development of virtual events. They are gaining popularity because of several factors. Users are becoming used to online platforms. The maturation of virtual technologies and the availability of high bandwidth are some of the other factors enhancing virtual events. Holding virtual conferences and other such undertakings reduces operational costs in the organisation. Such reduced expenditures have increased the popularity of these kinds of meetings and conferences. In spite of the economic benefits associated with this technology, Pearlman and Gates (2010) note that some organisations are reluctant to adopt virtual reality. The study cites uncertainty about the future of this technology as one of the reasons behind the reluctance. However, considering the advantages associated with virtual events, these doubts are unjustified. Reports of similar undertakings hosted virtually by such organisations as IBM and the American Cancer Society highlight the reliability and usefulness of these applications (Pearlman and Gates 2010). Global pandemics, such as influenza, and an increase in travel costs have led to reduced physical participation in conventions and other such business gatherings.
Virtual events have little or no carbon footprint. Such an attribute is important in the contemporary world, where people are concerned with global warming. It is important to note that holding large-scale events on a virtual platform is a difficult undertaking. In spite of these difficulties, it appears that the growth of these undertakings will increase in the future. Furthermore, simulating mega-events enhances the success of the actual on-the-ground occasions. The Effects of Facebook Users' Arousal and Valence on Intention to Go to the Festival: Applying an Extension of the Technology Acceptance Model (Woojin Lee, Lina Xiong, and Clark Hu) The influence of social media platforms on marketing is a force to reckon with in events management. Large groups of people and corporations come together on social media sites. The link between Facebook as a social medium and the management of activities is the main theme in the report cited above. Lee, Xiong, and Hu (2012) acknowledge the influence of social media on events marketing in the contemporary world. The sites make it possible to communicate directly with potential event attendees or the target audience. In addition, the gathering of first-hand reactions and suggestions regarding events is made easy. Lee et al. (2012) sought to determine whether Facebook users actually respond to events communicated through the social media site. Lee et al. (2012) used the technology acceptance model (TAM) to assess how arousal and valence influenced the response of Facebook users to events marketed via the site. The theory of reasoned action (TRA) forms the basis of TAM. It explains the construction of behaviours by individuals (Lee et al. 2012). According to Lee et al. (2012), individuals' reaction to technology is informed by its perceived ease of use and applicability. Using TAM, Lee et al. determined that emotions are a major factor in responding to a Facebook marketing event.
The importance of Facebook as a marketing tool is undeniable. For instance, every month approximately 3.5 million events are advertised on the site (Lee et al. 2012). The sheer volume of users makes the site a prime tool for managers keen on wooing attendees, especially in relation to mega-events. However, ensuring that the users respond to the advertisements is a different matter altogether. Lee et al. (2012) found that users who experience high levels of arousal and valence from an advertisement are more likely to access Facebook pages than their counterparts. Such users are also more likely to respond to the events marketed there compared to other individuals. Technological advancements give rise to new marketing options. Organisers adopt the most effective of these alternatives. Understanding the factors influencing these options will ultimately determine the success of marketing. Social media marketing is important in reaching out to users who are technologically savvy. Facebook is one of the most popular platforms used for this form of marketing. To determine the future responsiveness of these users, managers should focus on the perceived value of this social site. The event page should be easy to navigate. Updating the content on such pages will also enhance the success of future events. The Development of Competitive Advantage through Sustainable Event Management (Stephen Henderson) The article by Henderson (2011) revolves around the theme of sustainable event management. Henderson (2011) emphasises the need for organisations and event organisers to meet their projected desires. Managers can achieve this through sustainable application of both human and physical resources. Sustainability implies a form of development that meets present human needs while making compromises that help future generations meet their own needs. The definition of sustainable events encompasses several issues.
In a broad sense, the definition brings together both the process and the outcome or product of the event. The two aspects imply undertakings organised to meet sustainable standards to enhance the benefits accrued to the audience (Henderson 2011). To this end, a sustainable event should be beneficial to the people and the planet as a whole. In addition, it should meet the interests of the investors. Public and private sector occasions differ in relation to sustainable management. The former are more concerned with public welfare. Organisers of such undertakings strive to help the people and to safeguard the environment. On the other hand, management of private sector events mainly focuses on profit generation at the expense of the people and the planet (Henderson 2011). Sustainable coordination of activities may be compromised when competitive advantage is sought. However, cost leadership, focus, and differentiation strategies can be used to enhance sustainability without negatively affecting the profitability of these investments. The differentiation and focus approach is people-oriented. The event organiser focuses on the delivery of unique utility to the audience. Both approaches enhance profits since consumers are drawn to the event by the qualities they desire. As such, they are likely to contribute generously to support the process. Conflicts are likely to occur between sustainable management and cost leadership, especially with regards to the creation of competitive advantage. It is noted that most of the strategies used in lowering costs disregard the sustainability aspect of the event. For instance, generation of green energy to support the activities associated with the Olympics may be costly compared to the use of fossil energy. Such an event addresses the issue of sustainability, but negatively impacts on profitability. To realise sustainability, future event organisers should try to combine the various competitive elements of management. 
An event geared towards differentiation and focus is more likely to achieve the sustainability target. The same is not guaranteed when a competitiveness strategy is adopted. Sustainability is an important element in contemporary business management. Managers should realise the importance of upholding sustainability in their undertakings. Sustainability-oriented societies will most likely respond to sustainable events, irrespective of the price they are required to pay to enjoy such undertakings. Relationship Marketing of Services: Growing Interest, Emerging Perspectives (Berry 1995) Berry (1995) addresses the theme of relationship marketing of services in the context of events management. The author views the concept as a collection of activities involved in the attraction, maintenance, and enhancement of client relationships. It is noted that most mega-events are products of multiple services organisations. As such, the importance of relationship marketing in this field is irrefutable. The analysis of Contact Theatre relationships and marketing of services by Berry (1995) brings to light some essential aspects of relationship marketing. The interaction between the theatre and the various stakeholders reveals the framework adopted by this organisation in promoting its services. Berry (1995) regards the nature of interactions as a vital element in the success of the theatre. The success is especially determined by the response to the conventions and other gatherings held. Contact Theatre nurtures relationships with various individuals involved in the running of the business. They include, among others, teachers and youthful workers. The interactions with local, national, and international arts organisations highlight this connection. The link between arts directors and members of staff, for example, indicates internal relationships. According to Berry (1995), marketing differs depending on the nature of relationships exhibited in an organisation. 
The differences are inevitable since the roles of the individuals or groups in the interaction also differ. For instance, the marketing of internal engagements should focus on attracting and developing qualified employees (Berry 1995). Internal employees and the audience are the most important stakeholders with regards to the activities carried out at Contact Theatre. As such, internal marketing is essential since the services produced involve performance. To this end, the employees are the performers (Berry 1995). Collaboration with the audience is the only means through which the theatre can achieve its objectives. The success of Contact Theatre is measured using the status of the relationships it has with stakeholders and the response of the audience. The more people respond to artistic events, the more successful the managers. Contact Theatre is a non-profit organisation. In the opinion of this author, management in this entity differs significantly from the coordination of activities in private commercial organisations. The objectives of the latter involve the establishment of relationships geared towards the generation of revenue. On the contrary, Contact Theatre focuses on sustainable relationships with the society and other stakeholders. The internal and external stakeholders regard their relationship with Contact Theatre positively. The former regard this engagement as an open undertaking, leading to high levels of satisfaction and mutual interest. The external stakeholders view their interaction with the theatre as representative of all groups. References Berry, L 1995, ‘Relationship marketing of services: growing interest, emerging perspectives’, Journal of the Academy of Marketing Science, vol. 23, no. 4, pp. 243-245. Bladen, C 2010, ‘Media representation of volunteers at the Beijing Olympic Games’, Sport in Society, vol. 13, no. 5, pp. 728-796. 
Henderson, S 2011, ‘The development of competitive advantage through sustainable event management’, Worldwide Hospitality and Tourism Themes, vol. 3, no. 3, pp. 245-257. Lee, W, Xiong, L & Hu, C 2012, ‘The effect of Facebook users’ arousal and valence on intention to go to the festival: applying an extension of the technology acceptance model’, International Journal of Hospitality Management, vol. 31, no. 1, pp. 819-827. Pearlman, D & Gates, N 2010, ‘Hosting business meetings and special events in virtual worlds: a fad or the future?’, Journal of Convention & Event Tourism, vol. 11, no. 1, pp. 247-265. Preuss, H 2007, ‘The conceptualisation and measurement of mega sport event legacies’, Journal of Sport & Tourism, vol. 12, nos. 3-4, pp. 207-227.

Sunday, October 20, 2019

The eNotes Blog Happy Earth Day!

Happy Earth Day! This Earth Day we’re taking inspiration from literature’s greatest nature-lovers, the transcendentalists: Today we celebrate Earth Day, an annual event dedicated to environmental protection. Surprisingly, some of the earliest conservationists in history can be found in American literature. The transcendentalists, whose movement developed during the 1820s and ’30s, displayed a deep appreciation for the natural world and wrote avidly about their own experiences in nature. So frequently we approach climate change as a monolithic issue, impossible to tackle and incomprehensible in terms of personal philosophy. But perhaps Ralph Waldo Emerson, Henry David Thoreau, and Walt Whitman had it right; their steadfast appreciation and attempts at understanding the value of the natural world led them to be ever mindful of their surroundings. If we were to put these ideals into conversation with today’s problems, we may find some distinct similarities, as well as some helpful insight into the philosophical value of nature for mankind. Consider this line from Emerson’s Nature: “we distrust and deny inwardly our sympathy with nature.” His belief that mankind has an innate sympathy for nature that is denied is important when considering our own attitudes towards the natural world. According to Emerson, our own understanding and intelligence is hindered by our stubborn distrust of the intrinsic value of nature. By reaching an appreciation for and grasping an understanding of nature as a valuable aspect of our existence, one may come to understand what makes the transcendentalists so foundational in conservationist philosophy. By looking at their writings we may find instances of universality, musings upon the idea that all things are connected by a similar spirit. As an individual seeking to live a more sustainable lifestyle, eager to make a difference in the preservation of our earth, perhaps one may find solace in this sort of philosophy. 
By aiming for a reconciliation between reality and philosophy, one may better align everyday actions with the movement towards a sustainable future. In a world so obsessed with convenience, it is understandable why we as individuals struggle to adopt a sustainability-oriented lifestyle, but with the green and DIY movements gaining momentum, there seems to be collective hope for a cleaner, greener future. By simply implementing new habits like recycling, carpooling, composting, and conserving water and energy, there’s opportunity as individuals to make an impact. It may seem strange to draw comparisons between early American literature and current climate change issues, but by examining the attitudes held by the transcendentalist writers and those held by modern-day conservationists, one may come to see many similarities. It’s possible that by recognizing these similarities, people may come to see that while the task at hand is almost overwhelming, there’s hope in securing a personal philosophy of reverence for nature. For more information on the transcendentalist writers, see the links below! /topics/ralph-waldo-emerson /topics/henry-david-thoreau /topics/walt-whitman

Saturday, October 19, 2019

Movie review Example | Topics and Well Written Essays - 250 words - 5

Movie Review Example Much of the music, and many of the songs, instead reflect the nightlife and cabaret culture of the time. This, in my opinion, is a very effective device in transforming the movie into an interesting and original take on what otherwise could have been a very standardized movie. Unlike most other musicals, it also integrates songs into the narrative, to elaborate and comment on the storyline, instead of isolating them as separate elements. The setting of the movie in Berlin in the 1930s and the focus on nightlife and romantic relationships set the movie up for some unexpected musical numbers. Instead of the nightlife being portrayed as a blissful escape from the impending horrors of the outside world, it is shown as extremely seedy and somewhat distasteful in its indulgence of decadent behaviour. One of the first musical numbers is a flirtatious, provocative number performed by the protagonist, Sally Bowles. The song ‘Cabaret’, perhaps the most well-known of all the musical numbers in the movie, is in my opinion the darkest and most effective song performed. The lyrics and performance of the song are high-spirited, careless, and jovial, utterly contrasting with the environment within the movie – both geographically and within the seedy Kit Kat Klub itself. The way in which the songs provide a commentary for the movie, and are integrated within the dialogue to an extent, is a useful technique. Instead of being separated from the movie’s development, they are made a part of it, elaborating on information, feelings, and occurrences, much in the same way as spoken dialogue. This provides another interesting and effective use of song, which makes the movie stand out as an original creation. Another effective use of song can be found in the contrast between the opening and closing performances of the song ‘Willkommen’. At the beginning, it is performed in a

Friday, October 18, 2019

Business Ethics Essay Example | Topics and Well Written Essays - 750 words - 1

Business Ethics - Essay Example In summary, Enron collapsed due to bankruptcy that was associated with a major audit failure of the company’s books of accounts. The bankruptcy led to major losses for shareholders, highlighted by a dramatic fall in the share price from 90 US dollars to less than a dollar within one year (Thomas, 2002). This was followed by investigations and the summoning of the company’s executives, who were later sentenced to prison. WorldCom was a big company involved in the telecommunications business. It was declared bankrupt around July 2002 due to an accounting fraud, but later reemerged for business in 2004 after changing its name to MCI (Tolunay et al., 2005). WorldCom was regarded as one of the largest telecommunications companies operating in the United States, having expanded from Mississippi in 1983. The downfall of WorldCom began when it started experiencing diminishing infrastructural demand due to the oversupply of telecommunications capacity; as a result, its revenues fell while debt had been used to finance huge infrastructure investments. Thus, the main cause of the demise of WorldCom was the inflation of net income and assets through the transfer of expenses to the main capital account (Tolunay et al., 2005). There was an understatement of operating expenses, and capitalized costs were treated as investments. There were specific ethical violations in the accounting practices of Enron and WorldCom; in the year 2000, Enron had started showing financial difficulties and problems. CEO Jeffrey Skilling committed one of the ethical violations, as he had devised a method of concealing some company operations and the financial losses it incurred. This was referred to by financial analysts as mark-to-market accounting (Seabury, 2011). As Seabury (2011) highlights, it is a method used in trading securities by determining their actual value at the current moment. This method is considered unsuitable for conventional businesses. 
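To make the mark-to-market idea concrete, the sketch below contrasts it with historical-cost accounting on a single hypothetical contract. The figures and function names are invented for illustration and do not come from Enron’s actual books.

```python
def historical_cost_gain(cost, quantity, sale_price=None):
    """Under historical cost, no gain is recognised until the asset
    is actually sold at a real price."""
    return 0 if sale_price is None else (sale_price - cost) * quantity

def mark_to_market_gain(cost, quantity, market_price):
    """Under mark-to-market, the unrealised gain implied by the
    current quoted price is booked immediately."""
    return (market_price - cost) * quantity

# Hypothetical contract: 1,000 units acquired at 10, now quoted at 14.
# Historical cost reports nothing until sale; mark-to-market books
# 4,000 of profit today, even though no cash has changed hands.
unsold = historical_cost_gain(10, 1000)       # nothing recognised yet
booked = mark_to_market_gain(10, 1000, 14)    # unrealised gain booked now
```

The contrast shows why the method is described as unsuitable for conventional businesses: reported profit depends on estimated market values rather than completed transactions.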
The second violation that led to the collapse of Enron was issues of corporate governance (Jickling, 2002, p. 4). This was caused by a conflict of interest between the executives and the company. For example, Andrew Fastow, the Chief Financial Officer (CFO), had made a deal with Enron by partnering with it to do business. In these transactions, the CFO concealed losses and debts which were accrued by Enron. Hence, this had a significant impact on the reported Enron profits (Jickling, 2002). The third ethical violation in accounting practice by Enron was referred to as accounting issues (Jickling, 2002). This was due to the fact that Enron recorded cancelled contracts and projects as assets in its books and did not indicate which ones were cancelled. Regarding accounting issues, Jickling highlights that Enron used derivatives to manipulate accounting figures, and this was a violation of accounting ethics. The fourth ethical violation was pension issues, whereby Enron’s employees held a large percentage of stock. The last violation of ethics was in the financial audit. The firm’s auditor used careless standards in auditing Enron due to a conflict of interest over the fees levied for their services. They used unrealistic payment ratios which generated controversy as to whether they were taxable or not. On the other hand, WorldCom had also violated some ethics in accounting practice. Tolunay et al. (2005) highlight that there were three ethical violations

Board of directors Essay Example | Topics and Well Written Essays - 250 words

Board of directors - Essay Example Some of these challenges include a deficiency in the development of adequate Islamic markets for financial and cash investments. There is also an absence of investment capital structures in the Kuwaiti financial system, since a weak asset and liability management system is compounded by a deficiency of risk management policies. Kachel and his co-authors further add that Kuwait lacks a flexible liquidity market which could allow Islamic Financial Institutions. Due to weak governance factors, the global financial crisis of 2008 affected Islamic Financial Institutions. The formation of the Capital Market Authority in Kuwait led to the prohibition of money laundering, the introduction of licenses for stock markets, the regulation of foreign funds, the setting of investment standards, the mandated use of the Arabic language, a market fee of KWD 50,00 for foreign investment, and the introduction of credit purchase regulation. In conclusion, the introduction of the Capital Markets Authority in Kuwait has significantly improved the governance of its Islamic financial institutions. This has been achieved through several sound regulations, which keep its corporations either owned by the government or foreign in

Common sources of success or failure of startup firms Essay

Common sources of success or failure of startup firms - Essay Example While it is important for the economy to have an influx of new, innovative, and entrepreneurial companies, the actual success rate of new companies is dismally poor. In fact, 90% of all new companies launched in the UK will fail within the first two years (ibid). There are proven strategies and models available which can help increase success and growth rates, and one such strategy consists of thinking the project through and preparing a business plan. 'Perhaps the most important step in launching any new venture or expanding an existing one is the construction of a business plan' (Barrow et al, 2001:6). Although a business plan has several purposes and target audiences, most are produced with the limited aim of enabling the raising of finance. Raising finance is critical for the success of the venture, and 'the business plan is the minimum document required by a financing source' (Kuratko and Hodgetts, 2001: 289). More than three-quarters of business angels require a business plan before they will consider investing (Mason and Harrison, 1996). However, at the core of a successful enterprise is a planning and control effort that must recognise the needs of the venture and reduce them to a plan for systems that will help monitor and control execution as well as milestone progress, or the lack of it. Uncertainty and change are the norm, and a successful business plan must have the inbuilt flexibility to manage change and meet exigencies that arise during the course of operations. This report looks at the most common reasons for failure of start-up businesses, and this is used to inform a suggested strategy for the preparation of a good business plan. A plan that will address not only the needs of banks and potential investors but also those of other audiences, such as suppliers, distributors, and major customers. 
Above all it will guide decision making in new ventures and lay a clear path to be followed for the success of the new venture. This study limits itself to small and medium sized enterprises. Success and Failure Success is easy to understand: it implies that the projections of performance have not only been met but may have been exceeded as well. Failure is more difficult to define and has been variously described as 'discontinuance of ownership' of the business (Williams, 1993); 'discontinuance of the business' itself (Dekimpe and Morrison, 1991); and 'bankruptcy' (Hall and Young, 1991). In the following passages we explore what fundamental causes help a newly started business flourish and, conversely, what the main reasons for failure are. Different authorities have analysed the prime reasons for success and failure of start-up ventures. Quantitative studies by Lussier and Corman (1995); Everett and Watson (1998); Lau and Boon (1996); Lussier (1996); and Van Gelderen and Frese (1998) (quoted in Riquelme & Watson, 2002) have been used to formulate the reasons for the failure of new business ventures. The primary reasons are placed in a tabulated format as an appendix to this report. The highlights of the findings of the studies cited are discussed briefly below. The most important criterion appears to be the managerial team. For example, Macmillan et al (1985) conclude that the quality of the entrepreneur ultimately determines the investment decision of venture capitalists, notably a thorough

Thursday, October 17, 2019

The History of eBAGS Essay Example | Topics and Well Written Essays - 250 words

The History of eBAGS - Essay Example These stages of the new venture toward expansion would be vital in ensuring success. The logical first steps should involve a study of the purchasing attitudes of consumers. Since the company is an internet-based retail company, it would only be sensible to research whether or not internet shopping is a prevalent practice and to what extent; if not, then what measures would entice consumers to practice it. Furthermore, an analysis of the brands that have the strongest customer loyalty should be a main concern for the team. These brands, if not yet in the inventory of eBAGS’ numerous brand offerings, should ideally be made suppliers. This will make it easier for consumers to identify and trust the company. The history of eBAGS has made it a force to be reckoned with. In its first year, an “average monthly sales growth of 98% had broadened their product offering from six to fifty-six brands” (Schroeder, Goldstein & Rungtusanatham 2011, p. 507). Currently, it is in dire need of new ventures to safeguard continued growth, and adopting a new business model has become imperative. Their entry into the European market would be a promising new move that could yield positive results and increased profit.

Regulatory Requirements Essay Example | Topics and Well Written Essays - 250 words

Regulatory Requirements - Essay Example For an individual to be allowed to fly the unmanned aircraft, they need to undergo private pilot training, acquire an operator’s license, have authorization from the FAA, and also have some experience with unmanned aircraft due to the work they carry out. This process of certification is determined by the FAA. UAS pilots are more educated and require more flying experience because the kind of work they carry out is official (research, survey, and even law enforcement), unlike operators, who only fly for recreational purposes and hence only require basic flying skills and an operator’s certificate. A UAS pilot requires pilot certification on top of the operator’s certificate and should be approved by the FAA. Both the UAS pilot and the operator require basic flying skills and a certificate before they can be allowed to fly, despite their having different chores. An operator does not, however, require skills to operate the radio-controlled model. On the other hand, a UAS pilot requires a private pilot’s license, aviation knowledge, and even skills specific to flying the unmanned aircraft. An operator is restricted to flying only model aircraft and not any other aircraft that has more power or is more complicated than that, as they lack the skills. They are mandated to only carry passengers requiring educational or recreational trips and nothing else. As for UAS pilots, they take their orders from the FAA and should fly only over areas that are unpopulated unless given special approval by the


Odysseus analysis Essay Example for Free

Odysseus analysis Essay Odysseus himself, Pheidon said, had gone to Dodona to find out the will of Zeus from the great oak-tree that is sacred to the god, how he should approach his own native land after so long an absence, openly or in disguise. So he is safe and will soon be back. Indeed, he is very close. His exile from his friends and country will be ended soon; and you shall have my oath as well. I swear first by Zeus, the best and greatest of the gods, and then by the great Odysseus’ hearth which I have come to, that everything will happen as I foretell. This very month Odysseus will be here, between the waning of the old moon and the waxing of the new. Through Pheidon’s point of view, this passage illustrates Odysseus’ return to his homeland of Ithaca, near the end of his journey in Homer’s The Odyssey. Also, this passage shows the relation Odysseus had with the Greek gods, notably the almighty Zeus. In those days, only a few had the privilege of seeking advice from the gods. This shows that Odysseus was heroic and important in those days. The next passage which exemplifies Odysseus’ journey is when he first reveals himself to his loyal supporters, Philoetius and Eumaeus, in his home country after 20 years: (Book 21, page 282, lines 200-206) Father Zeus, the cowman said, hear my prayer. May some power lead him home! You’d soon know my strength and the power of my right arm. And Eumaeus added a prayer to all the gods that the wise Odysseus might see him home again. Odysseus, thus assured of their genuine feelings, said: Well, here I am! Yes, I myself, home again in my own country in the twentieth year after much suffering. This passage is significant in Odysseus’ journey, because this is the first time he is revealing his heroic identity. Odysseus was looking for companions to fight alongside him against the suitors, but he first had to find his loyalists. 
After Philoetius and Eumaeus genuinely showed their gratitude, Odysseus finally reveals himself after 20 years. The next passage takes place during a conversation between Penelope and Odysseus. After a period of 20 years separated from each other, the two finally have time to converse. Odysseus starts with his heroic victory over the Cicones: (Book 23, page 308, lines 310-313) He began with his victory over the Cicones and his visit to the fertile land where the Lotus-eaters live. He spoke of what the Cyclops did, and the price he had made him pay for the fine men he ruthlessly devoured. In this passage, Odysseus describes his heroic journey to the land where the Cyclops lived. He then explains how he made the man-eating Cyclops pay for what he had done. This passage really sheds light upon Odysseus’ heroic side, because he acted for the good of his men on that journey. Part 2. Risk taking: Odysseus is first to act when hunting a savage boar (when he obtained his infamous scar on his leg). (Book 19, page 261, lines 446-450) Odysseus was the first to act. Poising his long spear in his great hand, he rushed forward, eager to strike. But the boar was too quick and caught him above the knee, where it gave him a long flesh-wound with a cross lunge of its tusk, but failed to reach the bone. Trusting: Odysseus put Philoetius in charge of his estate’s cattle, which proves that he put trust in his true friends. (Book 20, page 271, lines 209-211) Odysseus, that marvellous man who put me in charge of his cattle in the Cephallenian country when I was only a youth. Courageous: When Odysseus had travelled to Telepylus, the Laestrygonians destroyed his fleet and all his fighting men. Odysseus then had to escape alone on the black ship. Next he told how he came to Telepylus, where the Laestrygonians destroyed his fleet and all his fighting men, the black ship that carried him being the only one to get away

Monday, October 14, 2019

Sterilization by Saturated Steam | Experiment

Introduction Many microorganisms are non-pathogenic and can live in harmony with humans, as they do not cause disease. However, pathogenic microorganisms can be deadly and therefore need to be eliminated from certain environments: hospitals, where patients are already unwell and their compromised immune systems make them susceptible to infection; water treatment and food and pharmaceutical production, where the supply reaches whole communities; and laboratories, where microbial contamination can produce conflicting results. To eliminate microorganisms, sterilization of equipment, hospital supplies and production sites is necessary. The sterilization process may use different methods: heat sterilization, radiation sterilization, filtration, and chemical sterilization. Radiation sterilises using gamma rays or ultraviolet light. Chemical sterilization uses toxic chemicals such as ethylene oxide to sterilise equipment. Filtration sterilises by filtering microorganisms out of gases and liquids that are sensitive to heat, making them unsuitable for heat sterilization (Goering et al., 2007). Heat sterilization is classified into dry heat and moist heat. Dry heat sterilizes by causing denaturation of proteins and oxidative stress in the cell (Goering et al., 2007). Moist heat uses heat and liquid to destroy microorganisms. The most common sterilization method is the use of moist heat in steam sterilization. Steam is considered an easy and effective sterilant, as it is economical, fast-working and harmless to users. Steam is non-toxic and economical, as it is simply pressurised water in the gas phase. Steam sterilization is a fast process because steam production does not consume much time and high pressure exposes the entire compartment to steam quickly.
Steam sterilization is effective because it destroys living microorganisms and, at high temperatures, prevents regrowth by destroying endospores as well. Steam sterilization acts by denaturing proteins within cells, thereby killing the microorganism. Water vapour releases a large amount of heat as it condenses, and this heat penetrates and kills endospores. The steam steriliser works using gravity and is therefore often called a gravity steriliser. Steam can be generated from an external source or produced from an internal water reservoir. Initially, water from the reservoir or steam from the external source enters the steriliser and is heated by a heating element. The steam produced rises to the top of the chamber, leaving cooler air at the bottom. Drains at the bottom of the autoclave let the cool air exit the compartment. As steam fills the steriliser, the thermostatic steam trap located at the bottom of the compartment closes. This allows the pressure of the system to build up, producing high-pressure steam. The timer begins at this point, measuring the time set for sterilisation. To maintain the temperature and pressure at the set point, the heating element turns on and off. After the set time has finished, the steam is either removed to the water reservoir, where it cools and condenses into water to be collected before venting to the room, or is vented straight into the room or a designated safe zone (Dondelinger, 2008). Steam sterilization can fail for a variety of technical reasons, such as leaks in the steam line. To monitor the function of steam sterilisers, a Sterikon® plus Bioindicator vial is added to every batch. Sterikon® plus Bioindicator contains the nutrients needed for bacterial growth, including sugar, together with Bacillus stearothermophilus spores and a pH indicator.
In a working steriliser these spores should be destroyed by steam at a temperature of 121 °C and a pressure of 1 bar (VWR, 2002). When all the spores have been killed, the vial should stay a pink/red colour. However, if the sterilization did not work, the B. stearothermophilus spores within the incubated vial will germinate over the next 24 hours. B. stearothermophilus grows by fermenting sugar, producing acid. This acid causes the pH indicator to change colour to yellow, and the microbial growth makes the vial turbid (VWR, 2002). This shows whether the steam steriliser is working to safe conditions and helps keep everything sterile. Another method to monitor steam sterilization is the use of Thermalog strips. Thermalog strips have two different outer layers, one of foil and one of paper; the paper side allows steam to enter. Between these outer layers a chemical is enclosed alongside a paper indicator. The chemical liquefies when steam and heat reach it, allowing it to flow along the paper indicator. The distance the chemical moves depends on the time of exposure to steam, the temperature of the steam and the volume of steam (3M, 2010). On the paper side there are two boxes labelled unsafe and safe. If the steam sterilisation proceeds properly, the chemical will move into the safe window of the strip. If it does not, there was not enough steam, not a high enough temperature or not enough time in the steriliser. This experimental report addresses the conditions needed for complete steam sterilization and for producing safe equipment. To understand these requirements, the experiment subjects B. stearothermophilus spore strips to different methods and conditions.
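The quoted kill conditions (121 °C for a set hold time) are conventionally reasoned about with first-order kill kinetics. This was not part of the practical, but a short sketch helps explain why a 15-minute hold reliably destroys spores. The D-value and z-value used here are illustrative textbook figures for B. stearothermophilus spores, not data from this experiment:

```python
# Standard first-order kill kinetics for moist-heat sterilization.
# D121 (minutes for a 10-fold kill at 121 C) and z (temperature shift
# giving a 10-fold change in kill rate) are illustrative textbook
# values, not measurements from this practical.

def survivors(n0, minutes, d_value):
    """Spores expected to survive `minutes` at constant temperature,
    given decimal reduction time `d_value` (minutes per 10-fold kill)."""
    return n0 * 10 ** (-minutes / d_value)

def f0(temps_c, interval_min=1.0, z=10.0):
    """Equivalent minutes of lethality at 121.1 C for a temperature
    profile sampled every `interval_min` minutes."""
    return sum(interval_min * 10 ** ((t - 121.1) / z) for t in temps_c)

# A 15-minute hold with D121 = 2 min gives a 7.5-log reduction:
# one million spores drop to roughly 0.03 expected survivors.
print(survivors(1e6, 15, 2.0))
```

On these assumptions even a heavy spore load is reduced far below one expected survivor, which is why the bioindicator vial should stay pink after a correct cycle.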
The experiment is important because steam sterilization helps prevent the spread of disease in the community by sterilising medical equipment, and gives reliable results by sterilising laboratory equipment. Hypothesis: Moist heat may be more effective than dry heat in the sterilization process, as moisture plays a substantial role in sterilising spores. Steam sterilization is the most used method of sterilization, yet its effectiveness may depend on specific operating conditions. Steam sterilization needs to be monitored as problems may arise with its function; this experiment also examines methods of monitoring the steam sterilization process. Materials and Methods: Refer to: BMS2052 Microbes in Health and Diseases Practical Class Notes (2010), Department of Microbiology, Monash University, pages 35-37. Results: Results 1.1 Thermalog strips were placed in Schott bottles, one with water and a loose cap and the other tightly capped with no water added. After 15-minute sterilization at 121 °C, the Thermalog strips read either safe or unsafe. Results 1.2 Two bioindicators, initially pink, were separated: one underwent steam sterilization and the other no sterilization. After incubation for 3 days at 56 °C, the bioindicators' colours were recorded. Results 1.3 All four screw-capped bottles had one strip of B. stearothermophilus spores inside. The four bottles underwent different conditions, e.g. steam sterilization or the addition of liquids. All bottles were then incubated for 3 days at 56 °C. Discussion The steam sterilization experiment shows the effectiveness of steam sterilization, the operating conditions required, and how to monitor the process using Thermalog strips and Sterikon plus Bioindicator vials. To determine the requirements for steam sterilization, Thermalog strips are used to measure its effectiveness.
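The discussion below applies one rule throughout: spores die only if the run is sterilised and steam can actually reach the strip, either because a loose cap lets chamber steam in or because water inside vaporises in situ. A minimal sketch of that expected-outcome logic, with descriptive bottle labels that are not the practical's exact wording:

```python
# Expected outcomes for the spore-strip bottles, using the rule the
# discussion applies: spores are killed only when the run is sterilised
# AND steam contacts the strip (loose cap admits chamber steam, or
# water inside vaporises into steam). Labels are descriptive only.

def spores_killed(sterilised, loose_cap, water_inside):
    steam_contact = loose_cap or water_inside
    return sterilised and steam_contact

bottles = {
    "1: control, not sterilised":           (False, False, False),
    "2: sterilised, water inside":          (True,  False, True),
    "3: sterilised, dry, tightly capped":   (True,  False, False),
    "4/5: sterilised, oil, tightly capped": (True,  False, False),
}
for label, conditions in bottles.items():
    growth_expected = not spores_killed(*conditions)
    print(f"Bottle {label}: growth expected = {growth_expected}")

# The practical actually observed no growth in bottle 4/5; the
# discussion attributes this to experimental error (cap not fully
# tight, strip not fully submerged in oil).
```

The same predicate covers the Thermalog Schott bottles in Results 1.1: the loosely capped bottle with water reads safe, the dry tightly capped bottle reads unsafe.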
In the experiment, the Schott bottle with water that was loosely capped gave a safe reading on the Thermalog strip. This is because steam had direct contact with the strip: the water inside the Schott bottle vaporises inside the steriliser, and the loose cap allows chamber steam to enter during sterilization. However, the other Schott bottle, which had no water and was tightly capped, gave an unsafe reading. That strip remained in the unsafe window because it did not have enough contact with steam: the tight cap kept steam from the steriliser out of the bottle, and with no water inside, steam could not be produced within the bottle either. This shows that for complete sterilization to occur there must be direct contact between the equipment being sterilised and the steam, a high enough temperature and enough time in the steriliser, all of which are monitored by Thermalog strips. Thermalog strips are effective at monitoring temperature and time of exposure to steam, yet they do not prove that heat-resistant spores will be destroyed under the specific conditions. Therefore Thermalog strips should be used in combination with other monitoring items. Steam sterilization monitoring can also be done with Sterikon® plus Bioindicator vials. This experiment shows how the Bioindicator vials work and how effective they are at monitoring the process. Bioindicator vials contain B. stearothermophilus spores in a nutrient broth with a pH indicator. Initially both vials appear clear and pink. The Bioindicator vial placed in the steriliser stays pink and clear, whereas the vial that was not sterilised became cloudy and yellow. This means the sterilised vial has no bacterial growth, as germination has not occurred, while the vial that was not steam sterilised did show germination. Germination of spores allows formation of bacteria.
These bacteria grow by fermenting sugar. This fermentation process generally produces acidic end products; the Bacillus family mainly produces lactic acid. As these products are acidic, the pH indicator changes colour from pink to yellow in response to their formation. The bacterial growth also makes the vial look cloudy due to turbidity. The results showed the Bioindicator vials behaved consistently with what was expected, showing that they are an asset in monitoring steam steriliser function. The third part of the experiment monitors the conditions needed for complete steam sterilisation. Bottle 1 is used as the control, showing that the B. stearothermophilus spores have the ability to germinate from the initial spore strip. If bottle 1 had shown no microbial growth, the results obtained would not prove steam sterilization had occurred, as the spores may not have been viable at all. Bottle 2 shows that steam sterilization can occur when water is added to the bottle. As the heat within the steriliser increases, the water within the bottle vaporises, forming steam. This steam has direct contact with the spores, allowing them to be completely eradicated. Bottle 3 was tightly capped and had no liquid added, making it impossible for steam to have direct contact with the spore strip. As the spores were still alive during incubation, they germinated and formed bacterial growth within bottle 3, seen as cloudiness. Since bottle 3 had no contact with steam, only dry heat sterilization acted within it, which is not effective at killing spores and is therefore less effective than the steam sterilization in bottle 2. Bottle 4/5 was tightly capped and had paraffin oil added. It would be expected that this bottle would show bacterial growth, as there is no steam in direct contact with the spore strips.
The oil could even act as a barrier preventing any steam entering through the tight cap from contacting the spores. However, the results showed no bacterial growth in bottle 4/5. This is most likely due to experimental errors: the spore strip was not completely submerged in paraffin oil and the cap of bottle 4/5 was not tight enough. This would allow steam to enter the bottle and contact the spore strip where the oil was not covering it. This experiment showed that for effective steam sterilisation to occur, equipment and instruments must have direct exposure to steam. The steam sterilization experiment showed that direct contact with steam is needed; this can come from steam supplied by the steriliser or from water vaporising within the vessel. The experiment could have included a few more conditions, such as a loosely capped bottle with no water and a loosely capped bottle with oil. The first would have shown that steam can enter a bottle and cause sterilization; the second would have shown the effect of oil on direct steam sterilization. Steam sterilization is a more effective and time-efficient process than dry heat sterilization techniques. Steam sterilization can kill heat-resistant bacterial spores, whereas most dry heat sterilization cannot. One dry heat method that is effective in killing bacteria germinating from spores is Tyndallization. Tyndallization involves heating equipment and instruments for a set time, ranging from a few minutes to an hour depending on the temperature, on each of three to four days. Initially this kills all existing bacteria and other microorganisms. On the second day the spores will have germinated, allowing the second round of bacteria to be killed.
The third day will allows time for the late germinating spores to regerminate and heating allows them to be killed (Aminot and Kerouel, 1997). This procedure despite its affectivity this procedure still takes several days to complete therefore steam sterilization is the better option. Sterilization is an important process in hospitals, water treatment facilities, food and pharmaceutical production and laboratories. In hospitals sterilization can prevent the spread of diseases caused by opportunistic pathogens such as Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa and Klebsiella pneumonia (Goering et al., 2007). Steam sterilization is therefore an ideal form of sterilization in hospitals to prevent spread of disease with the aid of Bioindicator vials to monitor function in every batch and occasional use of Thermalog strips. Conclusion Steam sterilization can only occur if the equipment being sterilised has direct contact with steam from steam provided in steriliser or from heat causing water within to vaporise into steam. Without steam contact the equipment is having only sterilization by heat which is an ineffective sterilization method on spores. Oils, fats and other hydrophobic substances should cause barriers for steam penetration making sterilisation less likely. It is important to monitor steam sterilisers as many mechanical interruptions could prevent complete sterilisation. Sterikon plus Bioindicator vials are an effective way to monitor steam sterilisers as they produce consistent results showing whether sterilisation has occurred or not. Thermalog strips can also be used to monitor if steam sterilising machines are reaching conditions that allow safe sterilisation to occur, for example the right amount of steam, temperature and pressure.