Race to the Top

Race to the Top is a competitive grant program created under the American Recovery and Reinvestment Act of 2009 to support new approaches to improving schools. The grants reward and encourage states to create education environments based on innovation and reform.

Both the governor of Georgia and the school superintendents have a responsibility to ensure that these funds meet their purpose. Neither Governor Nathan Deal nor state school superintendent John Barge was at the helm when the vision for Georgia was created, but both are vital in seeing that it is achieved. The governor, as the chief executive of the state of Georgia, is responsible for ensuring that the financial allocations aimed at this initiative are submitted in full and on time. The Governor’s Office of Student Achievement (GOSA), which represents the governor in this project, monitors and outlines the yearly implementation plans. GOSA is responsible for deciding which aspects of education should be rewarded, and in what capacities, under the initiative. The state superintendent has administrative duties in the implementation of the fund, and the school superintendents, who administer the individual schools, answer to the state superintendent. The state superintendent also has an advisory role on the awarding and use of the funds by various individuals and agencies.

The Common Core Georgia Performance Standards (CCGPS) were adopted by Georgia in July 2010. The governor takes the view that the initiative creates better purchasing power, since a common curriculum across many states means instructional materials and textbooks are developed for a single market, which lowers prices. He feels the initiative relieves taxpayers of some of the cost of education, allowing them to save and subsequently invest more. The standards are useful to the state school superintendent because the uniform, clearly stated guidelines facilitate his work with other stakeholders toward achieving the goals. Since the standards are in line with those of other states, the superintendent can easily evaluate the progress and performance of Georgia's implementation strategies by comparing them with those of successful states.

The initiative has faced resistance from stakeholders who hold the opinion that more federal control is likely to harm education in Georgia. Some feel that the Common Core was created by the federal government for its own benefit rather than for the education sector; however, the initiative operates in all the other participating states, and that school of thought remains false. Others fear that standards will be lowered, especially in mathematics, where Georgia has always lagged behind, and this fear fuels the resistance. It has been agreed that under the initiative no state is to lower its standards. Some teachers have been reluctant because they feel the Common Core standards tell them how to teach, which is not the case.

The competitive nature of the initiative may leave some students frustrated and may not deliver the intended student-based learning. The mechanism for identifying students and institutions for rewards is not all-inclusive. Different institutions have different resources, and making them compete on the same platform does not clearly indicate that the initiative is achieving its goals. The diversity of current classrooms poses a great challenge in determining progress and rewards. The abilities and achievements of individual students should not be evaluated on a standard scale, since students differ in many respects.

Whitfield County Schools was given the grant for innovation. According to the principal, Dr. Judy Gilreath, the grant is aimed at expanding the Beyond the Classroom projects. Focusing on literacy and wellbeing, the project aims at improving the reading scores of children from birth to the age of eight years. The grant is intended to expand the lunch and learning academies in the communities to six or more. Resources will be channeled to addressing health, family and emotional matters. The funds will be used to attract more partner agencies to work alongside the likes of the Get Georgia Reading Campaign and Readers to Leaders.

In order to see the success of the initiative, the states should build more effective communication and collaboration among the stakeholders. More competitive grants and funds should be established to address the various aspects of quality education. The department of education should not back away because of the additional powers given to state school superintendents, but should continue to exercise its power to monitor and hold every individual accountable for the use of the funds.

More Than Just Race

In his work “More Than Just Race,” William Julius Wilson illustrates two crucial factors which influence racial group outcomes: culture and social structure. Through the research of others and his own, Wilson demonstrates how the two factors influence life in the ghetto, the dissolution of the black nuclear family and the plight of black males (Wilson, 2009). This work illustrates how both cultural and structural forces directly contribute to racial inequality in society.

According to Wilson, national views and beliefs are likely to influence racial inequality in society. In addition, cultural traits such as shared outlooks, traditions and behaviors are likely to influence the way people view each other in the cultural arena. On the other hand, structural factors, including the behavior of individuals in society such as stigmatization, stereotyping, and workplace and educational discrimination, are the major forces likely to influence the way people live and associate with one another (Mitchell-Dix, 2015). Societal integration may also be influenced by structural factors such as institutional practices, policies and laws. Both the structural and cultural factors have a great impact on the lifestyle of most people in society because they influence the level of poverty, nuclear family breakdown and joblessness.

Basically, according to Wilson, it is more than race that causes societal inequality in the United States. The views and beliefs of the larger society would lead it to believe that blacks are culturally poor. Nonetheless, structural factors such as social and economic mobility are more likely to cause inequality among the underclass (Wilson, 2009). In conclusion, although it might be hard to eradicate racial inequality in the underclass, it may be even harder to eliminate or mitigate the structural and cultural factors causing that inequality in society.

CFD

Physical Law and Properties

Channel flows are characterized by a free surface, a maximum velocity below the free surface and a no-slip condition at the walls. The boundary walls, together with the no-slip condition, cause the wall shear stress and the development of the velocity gradient to depend on the wall roughness and the fluid viscosity. Observing the fluid flow along the x-axis and the channel's depth along the y-axis, a velocity profile as shown in figure 3b is established (Ranade, 2013). Because the average velocity along the flow depends on the streamwise distance x, the velocity becomes a one-dimensional variable. This allows the longitudinal section to be analyzed in two dimensions, as shown in figure 3a, since the domain depth is too small to have any impact. Once this configuration is attained, the fluid flow is normally considered two-dimensional because the boundary layers that develop on the walls are consistent with those in ducts and pipes.

The no-slip condition along the walls affects the flow of the fluid through the fluid's viscous effects. The boundary layer mainly affects the region along the entrance (Steiner, 2014). The entry length is where the flow develops its velocity profile until it reaches the hydrodynamically developed region, where the velocity profile remains unchanged, as illustrated in figure 4.

When a liquid flows along a pipe of constant diameter, viscous effects normally cause the pressure to drop, because the fluid flow velocity is related to the pressure gradient along the pipe. The resulting flow is referred to as Poiseuille, or parabolic, flow. The Poiseuille (parabolic) equation is applicable to laminar flows but is never suitable for turbulent flows. The equation for the laminar plane-flow velocity is usually given as follows.
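The equation itself appears to have been lost in formatting. A standard form for fully developed laminar flow between parallel plane walls, assuming the channel height is $h$, the wall-normal coordinate $y$ is measured from one wall and $U$ is the mean (bulk) velocity, is

$$u(y) = \frac{6U}{h^{2}}\, y\,(h - y),$$

which gives zero velocity at both walls (the no-slip condition) and a maximum of $1.5U$ at mid-depth.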

Assumptions and Approximations

The CFD model development depends on fundamental approximations and assumptions which lead to a two-dimensional domain. Under all conditions the assumptions are that water enters at a constant velocity, the walls are smooth and the flow is incompressible. For a velocity of 0.1 m/s the flow is considered laminar, while for velocities of 0.2 m/s and 0.5 m/s the flow is considered turbulent (Wendt, 2012). In order to develop the CFD simulation in the geometrical model, the previously mentioned two-dimensional slice of the longitudinal section is converted into a three-dimensional domain without materially affecting the flow. The relative depth of the domain must be roughly 10 to 20 times smaller than the base height of approximately 0.02 m; accordingly, the effective depth used for the calculations is 0.002 m.

Theoretical Calculations

Reynolds number

The Reynolds number in equation 1 was used to establish whether each flow was laminar, transitional or turbulent. The velocities in table 2 give flows that are laminar, transitional and turbulent respectively. The flow regimes were calculated before the simulations were conducted so that the simulation results could be reviewed against the expected regime.
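The regime check described above can be sketched in a few lines. Since equation 1 and table 2 are not reproduced here, the characteristic length (taken as the 0.02 m channel height), the water properties from the solver setup and the usual internal-flow thresholds of 2300 and 4000 are all assumptions of this sketch.

```python
DENSITY = 1000.0      # kg/m^3, water (value used in the CFX setup)
VISCOSITY = 1.307e-3  # Pa.s, dynamic viscosity of water at ~10 degrees C
CHAR_LENGTH = 0.02    # m, assumed characteristic length (channel height)


def reynolds_number(velocity: float) -> float:
    """Re = rho * U * L / mu for a given mean inlet velocity."""
    return DENSITY * velocity * CHAR_LENGTH / VISCOSITY


def regime(re: float) -> str:
    """Classify the flow using commonly assumed internal-flow thresholds."""
    if re < 2300:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"


for u in (0.1, 0.2, 0.5):
    re = reynolds_number(u)
    print(f"U = {u} m/s -> Re = {re:.0f} ({regime(re)})")
# Roughly Re = 1530 (laminar), 3060 (transitional) and 7650 (turbulent),
# consistent with the regimes stated above.
```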

Laminar Poiseuille Flow solution

The laminar Poiseuille flow equation was employed to solve for the axial velocity of the laminar flow. In order to attain a refined solution, the y-axis values were calculated from 0 to 0.02 m in increments of 0.001 m, giving roughly 20 data points. Excel was used to conduct the calculation, and the results are shown in Appendix 1. Once the data were obtained, the velocities were plotted on the y-axis against the observed height on the x-axis, as illustrated by figures 5 to 7. Although the Poiseuille flow equation is intended for laminar flow, the other flow conditions were also solved with it for reference.
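The spreadsheet calculation described above can be sketched as follows. The 0.02 m channel height, the 0.001 m step and the 0.1 m/s mean velocity come from the text, while the profile formula u(y) = 6U·y(h − y)/h² is the standard plane Poiseuille result assumed earlier in place of the missing equation.

```python
H = 0.02  # m, channel height (y runs from 0 to 0.02 m)


def poiseuille_velocity(y: float, mean_velocity: float) -> float:
    """Axial velocity of fully developed laminar plane Poiseuille flow."""
    return 6.0 * mean_velocity * y * (H - y) / H**2


# Tabulate the laminar profile for U = 0.1 m/s, as done in Appendix 1.
for i in range(21):  # y = 0.000, 0.001, ..., 0.020 m
    y = i * 0.001
    print(f"y = {y:.3f} m  u = {poiseuille_velocity(y, 0.1):.4f} m/s")
```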

The three graphs indicate that the velocity profiles for all flow conditions share similar parabolic traits. This is expected for U∞ = 0.1 m/s and plausible for U∞ = 0.2 m/s, whose Reynolds number places it in the transitional regime. However, U∞ = 0.5 m/s was calculated to be turbulent, so the laminar calculation is most likely inapplicable to that flow. Given that the analytical solution applies to the laminar flow regime, the initial validation and grid convergence study are based on the laminar flow solution at 0.1 m/s.

Modeling

Process Overview

The computational fluid dynamics package ANSYS Academic R16.2 (CFX Fluid Flow) was used to model the flow of water in the channel. The approach involved creating the geometry, and subsequently the mesh and physical model, in order to determine a mesh sufficient to establish a converged solution (Wendt, 2012). The initial and boundary conditions must be established in the setup prior to processing. The CFX modeling overview is presented in figure 8. The CFX solver manager is used to obtain the solution, where convergence may be observed and verified. As illustrated in the diagram below, mesh refinement may be performed based on the quality and accuracy of the result. Once the expected solution is attained, the results may be exported and viewed in CFD-Post for post-processing.

 

Geometry

The geometry was established using the built-in modeling tool of ANSYS CFX, the Design Modeler. A 2 cm × 1 m rectangle formed the workspace in which the fluid travelled in the x-y plane. The domain was given a depth of about 0.002 m in the z direction, with symmetry, in order to enable discretization of the flow domain.

Mesh

In order to analyze multiple nodes, the Fluid Flow (CFX) mesher was used to create a grid-pattern mesh. The default automatic mesh had a low number of nodes and lacked nodes in the free stream (Wendt, 2012), so there were inadequate data points in the mesh to generate an accurate solution. Mesh refinements were performed to increase the accuracy and quality of the grid. Figures 10 to 13 represent the development of the grid refinements.

 

Using the inlet velocity of 0.1 m/s, a simulation was performed to obtain a solution for comparison with the laminar profile. The grid resolution was then increased from 20 to 50 divisions, with each simulation result being compared against the analytical laminar flow profile (Wendt, 2012). Upon analysis, the mesh with 50 sweep divisions was identified as sufficient for performing the other simulations.

The inlet velocities were then changed to 0.2 m/s and 0.5 m/s to match the remaining flow conditions. At U∞ = 0.2 m/s, simulations for both the laminar and turbulent regimes were performed because a transitional flow could behave as either. At U∞ = 0.5 m/s, where the Reynolds number is higher, the CFD setup used the k-epsilon turbulence model.

Boundary and Initial Conditions

As specified by the project description, each boundary face was defined to correspond to the flow conditions. The inlet speed was set at 0.1 m/s while the outlet pressure was set to 0 Pa. The minimum and maximum boundary faces in the y direction were modeled as walls with the no-slip condition applied, and the nominal roughness was assumed to be 0 in the model (Wendt, 2012). The material was fixed as water at approximately 10°C, with a dynamic viscosity of 1.307 × 10⁻³ Pa·s and a density of 1000 kg/m³. The laminar model was applied for the 0.1 m/s case, while the k-epsilon turbulence model was applied for both 0.2 m/s and 0.5 m/s. The maximum number of iterations was set at 500, while the convergence criterion was set at 1 × 10⁻⁸.

Processing and Convergence

After the geometry, mesh and setup were complete, each model was run and the quality of each solution was checked, as illustrated by the convergence plots in figures 15 to 17.

 

Post Processing

In post-processing, velocity vectors were created to visualize the flow velocity throughout the channel length. The velocity data were used to prepare graphs at points 0.1 m, 0.5 m and 0.9 m along the channel. The information collected was then compared against and analyzed alongside the theoretical calculations.

Data and Model Analysis

Mesh Refinement

The laminar 0.1 m/s flow condition was used to validate the model for the simulations. The figures below illustrate the validation.

 

 

 

The automatically generated mesh is displayed in figure 10; it has an inadequate number of divisions to produce an accurate representation of fluid flow through the 1 m × 0.02 m × 0.0001 m rectangular duct.

 

Comparison of results with theoretical values

The effect of mesh refinement on the results

Comparison and analysis of figures 18 and 19 help determine the impact of grid refinement on the accuracy of the results. Figure 18 shows sharp points at both the 0.5 m and 0.9 m measurement points, indicating that the mesh had inadequate refinement (Xiao, 2016). Figure 19, on the other hand, is devoid of sharp points, which is sufficient for a correct fluid model. Figures 15 and 16 illustrate the convergence and RMS error values for the respective meshes. Therefore, the 50-sweep mesh offers a fundamental improvement to the axial velocity comparisons while providing only a minimal improvement to the convergence.

Observations about the boundary layer

Observing figures 15, 16, 17, 19, 20 and 21, the effect of the inlet velocity on the boundary layer can be determined. Figure 19 demonstrates laminar traits, as the velocity at both 0 m and 0.02 m in the y direction is 0 m/s. In addition, the axial velocity at 0.9 m, when compared against figure 4, emphasizes the match between the models. Figure 18 illustrates that the fluid velocity at the boundary becomes stagnant, forming a boundary layer of increasing thickness (Xiao, 2016). The growth of the boundary layer results from viscous interactions between it and the adjacent fluid layer, and these layers in turn interact with the layers further from the wall. The inner fluid core must increase in velocity as the boundary layer thickens in order to satisfy mass conservation. Eventually, the velocity settles to an average profile across the channel that no longer changes with distance, at which point the flow type and boundary layer are fully established.

Figure 18 represents the laminar flow condition identified in table 2; the display assisted in confirming that the calculated Reynolds number falls within the specified laminar range. Figures 20 and 21 display the velocities at 0.1 m, 0.5 m and 0.9 m, which are indicative of turbulent flow developing from the inlet as the fluid particles proceed along the channel (Terline, 2009). In order to establish the full flow profile, equations 2 and 3 must be used; they indicate that a channel with the same specifications and inlet velocities of 0.2 m/s and 0.5 m/s would require lengths of approximately 2.92 m and 7.30 m respectively before the turbulent flow is fully established and the boundary layer remains stable and accurate. The Reynolds numbers for figures 23 and 24 are believed to be transitional and turbulent respectively. It is therefore recommended that the length of the model analyzed be increased in order to allow a fully developed analysis of the boundary conditions and velocity profiles to be undertaken.

The Capital Asset Pricing Model (CAPM)

Introduction

The capital asset pricing model (CAPM) is a fundamental financial model which describes the relationship between expected return and the risk associated with pricing most risky securities in the economy. The model was first published in 1964 by William Sharpe. According to the model, investors expect to be compensated for both risk and the time value of money of the assets chosen (Raiborn, 2009). The model is of great importance to investors because it helps them identify assets whose rate of return justifies adding them to a well-diversified portfolio, since most assets carry some non-diversifiable risk. The model calculates the required rate of return on an asset based on a measure of its risk; to do so, it uses a risk multiplier known as the beta coefficient (Anne, 2013). Like all other financial models, the capital asset pricing model is based on assumptions. This work will present a brief explanation of the capital asset pricing model's history, its pros and cons, an analysis of newer models that have been added to CAPM to improve its estimates, and a discussion of how these newer models differ from CAPM.
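As a minimal illustration of the relationship the model describes, the sketch below computes E(R) = Rf + beta (Rm − Rf); the numerical inputs are hypothetical and are not drawn from any source cited here.

```python
def capm_expected_return(risk_free_rate: float, beta: float,
                         market_return: float) -> float:
    """Expected return under CAPM: E(R) = Rf + beta * (Rm - Rf).

    (Rm - Rf) is the equity risk premium; beta scales the asset's
    exposure to non-diversifiable market risk.
    """
    return risk_free_rate + beta * (market_return - risk_free_rate)


# Hypothetical inputs: 3% risk-free rate, beta of 1.2, 8% expected market return.
print(capm_expected_return(0.03, 1.2, 0.08))  # 0.03 + 1.2 * 0.05 = 0.09, i.e. 9%
```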

Assumptions of CAPM

The model assumes that during the investment process no transaction costs or taxes exist. It assumes that lenders and borrowers can lend and borrow unlimited amounts of money at the risk-free rate (Macher, 2006). Moreover, it holds that investors are wealth maximizers who select their investments based on standard deviation and expected return. The model assumes that there are no restrictions on the short selling of financial assets. Furthermore, it assumes that all investors in the economy hold the same expectations about the market. The model goes on to assume that the quantities of all financial assets are fixed and given, so no investor's activity can influence market prices (Hoffmann, 2008). Some of these assumptions do not hold in real investment settings, although the model still works well enough for selecting a diversified asset to add to an existing portfolio and estimating its rate of return.

The Advantages and disadvantages of CAPM

The capital asset pricing model is believed to have more advantages than other methods used to calculate the rate of return and the risk of a portfolio to be selected and combined with existing portfolios. It is considered superior to the weighted average cost of capital for providing the discount rates to be used in investment appraisal. It is a better approach to calculating the cost of equity than the dividend growth model because it explicitly takes into account the organization's level of systematic risk relative to the stock market as a whole (Douglas, 2011). The model generates a theoretically derived relationship between systematic risk and required return which has been subjected to frequent empirical testing. The model is also valuable because it considers systematic risk, which reflects the reality of investment, and on which investors can rely in order to eliminate from their portfolios the holdings with the highest unsystematic risk.

Like any other model, the capital asset pricing model suffers from a number of limitations and disadvantages. One limitation is that, for the model to be used efficiently and effectively, the investor must assign values to the predetermined inputs: the risk-free rate of return, the equity risk premium, the equity beta and the return on the market (Dawson, 2004). The yield on short-term government debt, which investors use as the risk-free rate of return, is flexible and changes frequently with current economic circumstances. It is also always difficult to determine the value of the equity risk premium (ERP).

As a matter of fact, the return from the stock market is the sum of the average dividend yield and the average capital gain. In modern times, uncertainty in the expected return also arises because the values of beta are flexible and change over time. Furthermore, it is always difficult to establish a suitable proxy beta because few organizations in the economy undertake only a single business activity. It is also difficult to use a comparable company's beta directly because its capital structure information is not readily available (Elton, 2010). Some companies have complex capital structures with various sources of capital, making it difficult for investors to establish the beta and identify an appropriate asset for investment. The model's assumption of a single-period time horizon also contrasts with the multi-period view of investment appraisal taken by other approaches such as the weighted average cost of capital.

Other new models which might be added to CAPM to improve models and data

The weighted average cost of capital (WACC) is a model used to calculate a company's cost of capital in which each class of capital is suitably weighted. According to the model, all sources of capital, such as bonds, preferred stock, long-term debt and common stock, are used in the calculation of the weighted cost of capital (Damodaran, 2012). The model assumes that the rate of return and the beta on equity usually increase as the value of WACC increases. In comparison to the capital asset pricing model, which uses a fixed beta to calculate the expected rate of return on an investment, the weighted average cost of capital depends on the average rate of return obtained by combining the various costs of capital in the organization.
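A minimal sketch of the weighting the paragraph describes, using hypothetical figures; only common equity and debt are shown, although the same weighting extends to preferred stock and other capital classes, and the after-tax treatment of debt is an assumption of this example.

```python
def wacc(equity_value: float, debt_value: float,
         cost_of_equity: float, cost_of_debt: float, tax_rate: float) -> float:
    """Weighted average cost of capital for a simple equity-plus-debt structure.

    Each class of capital is weighted by its share of total financing; the
    debt cost is taken after tax because interest is usually tax-deductible.
    """
    total = equity_value + debt_value
    return ((equity_value / total) * cost_of_equity
            + (debt_value / total) * cost_of_debt * (1 - tax_rate))


# Hypothetical firm: 60m equity at a 10% cost, 40m debt at a 6% cost, 30% tax rate.
hurdle = wacc(60e6, 40e6, 0.10, 0.06, 0.30)
print(round(hurdle, 4))  # 0.6*0.10 + 0.4*0.06*0.7 = 0.0768, i.e. about 7.7%
```

A project whose internal rate of return falls below this hurdle would be rejected, as discussed in the next paragraph.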

The assumptions of the weighted average cost of capital (WACC) include the following: the lenders of finance do not change their required rate of return as a result of the investment project being undertaken; the investing organization is considerably larger than the investment project; and the business activities of the investment project and of the organization are similar (Kumar, 2010). In addition, the model assumes that the financing mix and capital structure used to undertake the new investment are similar to the organization's existing financing structure. These assumptions suggest that WACC can be used as a discount rate as long as the investment to be undertaken does not change the organization's financial and business risk (Daft, 2010). In cases where the financial risk of the business or investment to be undertaken differs from that of the investing organization, it is more appropriate for investors to use the capital asset pricing model to calculate a project-specific discount rate. WACC is of great importance to an investor in determining the appropriate project to invest in, especially when choosing among competing investments: calculating each project's specific discount rate provides an appropriate way of selecting the best project. If the internal rate of return of a project is lower than the weighted average cost of capital, the project should be rejected.

Investors might also use a multi-beta model to make decisions about the asset with the highest return and lowest risk. The arbitrage pricing model allows multiple sources of market risk to be considered and compared with the required rate of return from the investment, and the betas of each asset with respect to each factor are compared against one another (Wu, Hung-Yi, 2012). The multifactor model uses historical data, relates it to specific macroeconomic factors such as the slope of the yield curve, the level of interest rates and GDP growth, and determines each organization's beta against these variables. Investors may also use market-price-based models to establish an appropriate asset to invest in based on its rate of return and risk. In other instances, especially in modern times, investors select the asset to invest in using accounting-information-based models (David, 2014): they calculate accounting ratios for the asset and establish a scaled risk measurement ratio on which the investment decision is based.
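The multi-beta idea can be sketched as the risk-free rate plus a sum of factor premiums, each scaled by the asset's sensitivity (beta) to that factor; the factor names and numbers below are hypothetical placeholders rather than estimates from any dataset.

```python
def multifactor_expected_return(risk_free_rate: float,
                                betas: dict[str, float],
                                premiums: dict[str, float]) -> float:
    """Arbitrage-pricing-style expected return:
    Rf plus the sum of beta_k * premium_k over the chosen factors."""
    return risk_free_rate + sum(betas[k] * premiums[k] for k in betas)


# Hypothetical sensitivities to GDP growth, interest-rate level and yield-curve slope.
betas = {"gdp_growth": 0.8, "interest_rate": -0.3, "yield_slope": 0.5}
premiums = {"gdp_growth": 0.02, "interest_rate": 0.01, "yield_slope": 0.015}
print(multifactor_expected_return(0.03, betas, premiums))
# 0.03 + 0.8*0.02 - 0.3*0.01 + 0.5*0.015 = 0.0505, i.e. about 5.1%
```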

Conclusion

Despite the fact that there are various models which investors might use to calculate the rate of return and the risks associated with the assets selected for investment, it is always important to select the method which best meets the investor's expectations (Bartlett & Beamish, 2013). Although the capital asset pricing model is an old model for calculating risk and the required rate of return of an asset, it is considered the best model for an investor to use in deciding which assets to select. Compared with the weighted average cost of capital and accounting-information pricing models, CAPM is simpler to calculate and workable in most sectors of the economy.

Biopsychology: Sleep and memory

Previous research on sleep has most often focused on cognitive processes and memory. There is a gap in previous research concerning how sleep affects cognitive processes through its effect on emotion. The benefits and effects of sleep would first be evident in emotion before affecting memory. Human memory has been strongly linked to the motivation applied to day-to-day emotional challenges, which further links cognitive functions together.

Miller, N. L., Tvaryanas, A. P., & Shattuck, L. G. (2012). Accommodating adolescent sleep-wake patterns: The effects of shifting the timing of sleep on training effectiveness. SLEEP, 35(8), 1123-1136. http://dx.doi.org/10.5665/sleep.2002

In this investigation, the researchers examined the effect of sleep on soldiers receiving training. The research compared groups that received extra sleep each night with a control group that had less sleep. The results strongly indicated that the group that received enough sleep recorded improvements in emotion, positive mood and physical fitness, and was less likely to drop out.

Evidence

The study did not find a clear connection between sleep and emotion via memory. The researchers concluded that sleep has only a small effect on functioning.

Claims

The nervous system is linked to several parts of human emotion, so an effect of sleep on the nervous system would result in an effect on human emotion.

Research design used

The research design used here is descriptive because it describes the performance of soldiers who have had enough sleep and seeks a way to link sleep and memory to emotion.

Voderholzer U, Piosczyk H, Holz J, Landmann N, Feige B, et al. (2011) Sleep restriction over several days does not affect long-term recall of declarative and procedural memories in adolescents. Sleep Med 12: 170–178.

Claims

The adolescents used in this study revealed that sleep did not affect memory directly but affected emotions, and the emotions in turn affected memory. The researchers concluded that sleep promotes emotion, while emotion independently controls memory in memory-related tasks.

Evidence

In this manner, long-term memory is engineered to interact with the psychology of emotions. Consequently, sleep functionally affects expression, emotions and human behavior.

Reference

This reference was chosen because of its design's contribution to measuring a particular aspect of the interaction between long-term memory and emotion.

Wilhelm, I., Diekelmann, S., Molzow, I., Ayoub, A., Molle, M., et al. (2011). Sleep selectively enhances memory expected to be of future relevance. J of Neurosci 31: 1563–1569.

Claim

Sleep improves memory. However, the effects of sleep depend on whether an individual sleeps during the daytime or at night. The conditions of sleep set the pace for the encoding and reinforcement of memory and emotion.

Evidence

Lack of interference while sleeping increases the motivation for high-level cognitive thinking as well as emotional wellbeing. Daytime sleep is subject to interference, and this same interference acts as a possible cause of poor memory and poor emotion.

Reference

Beyond merely claiming possible causes of sleep deprivation and its effects on cognitive performance, these authors devised a way to place their participants in an uncomfortable state in order to probe ethical issues in sleep deprivation.

Carskadon, M.A (2011). Sleep’s effects on cognition and learning in adolescence. Prog Brain Res 190: 137–143.

Claim

What is the exact role of sleep in social functioning, school performance and declarative memory? This was the exact question the research aimed to answer. Social function is linked to an individual's mood, so this research serves as evidence that performance in social functions and general class performance is rooted in emotion. The greater an individual's emotional wellbeing, the higher their cognitive performance; sleep comes in as a factor that balances emotion, which in turn improves cognitive performance.

Evidence

The various components of the central nervous system that are linked to emotion have been identified as having major effects on human behavior. The study used a research design that investigates adolescents. This study usefully narrows down current perspectives in psychology and explains the evolution of human behavior with regard to sleep, emotion and memory.

Ethical issues

The researcher in this study maintained ethical principles and standards that helped address the individual rights of participants engaging in the research.

Based on your topic selection and initial resources, what is your research question?

Based on the research above, my research question is: what relationship do sleep patterns have with emotion and cognitive performance?

How have advancements in technology influenced research on human behavior?

Because human behavior is changing, most researchers have started to identify the role of emotion in cognitive performance. This is because most human behavior is attached to motivation, which is revealed through emotions.

How have views on your topic changed over time?

Various techniques throughout history have revealed the relationship between human emotions, behavior and cognitive performance.

What conclusions can be reached, based on these studies and references?

The general idea of biopsychology is that human behavior can be influenced by any natural process rooted in the brain. It is the only approach that attributes changes in human behavior to the central nervous system.

The World According to Stewart

According to Stewart, it is important to be genuine and analytical about the issues of life surrounding us. He believes it is important to go out into the world with a deep feeling and reflection of who you are in order to establish how people define your character. He believes it is good to leave things better than one found them, however obvious situations and circumstances might be. One should not mind what others have in mind about one's thoughts and opinions. This idea differs from the real world, where one must consider the repercussions of opinions and thoughts before exposing them (Houpt). Stewart supports the society in attacking all silliness, whether social, political or economic, which is not always the case in the real world, because some absurdities might make sense. Both Stewart's world and the real world support individuals pursuing the truth and the things which matter in the world. It is the role of each individual to act in the interest of others as well as their own. This work demonstrates the similarities and differences between Stewart's world and the real world, basing the argument on biblical teaching.

According to Luke 8:19-21, the mother and brothers of Jesus came to see him but were unable to reach him because of the huge crowd (Houpt). When he was informed that his brothers and mother were waiting to see him, he replied that his brothers and mother are those who listen to the word of God and put it into practice. His answer was not as absurd as his followers might have viewed it; rather, it resolved the conflict of interest between his followers and his family members (Houpt). The scripture displayed conformity to social values, and Jesus in this account did not violate the local social values. In conclusion, people in biblical times might have reacted negatively to the reading, but it really teaches us to be genuine and to act in good faith in whatever we engage with, regardless of family relationships and ties.

Work Cited
Houpt, S. (2009). The World According to Stewart. American Politics: Canadian Perspective.

I.T Strategic Governance Controversy

According to Craig, aligning information technology strategy with business governance has been one of the greatest hurdles for organizational governance, especially for business executives and boards, over the last few decades (Tabach, 2013). In order to eliminate the controversy that exists between information technology strategy and governance, it is appropriate to align the IT strategies of the business with appropriate information technology governance. This work will demonstrate how the business's information technology alignment strategy might conflict with the organization's information technology governance.

In order for the executive and the board of governance to work effectively in making the organization more effective at executing its goals and objectives, they should ensure that the business information technology strategies are appropriately aligned. An information alignment strategy calls for the development of a mutually sustainable, cooperative relationship between the business and information technology, and for ensuring the established relationship benefits both parties (Isaach, 2014). Moreover, in order to hedge against organizational failure, it is of great significance that both the information technology executive and the business's board of governance be recognized as fundamental to the development and execution of appropriate information technology strategies for the business. Both the IT executives and the business executives should be considered equally fundamental in the development of the information technology strategies and operations of the business.

In order for both the IT and business executives to establish accurate metrics which connect business priorities, performance and objectives with the business strategies, it is appropriate for the two to evaluate the organization's:

  • Return on investment
  • Total cost of ownership
  • Revenue growth
  • Customer satisfaction
  • Reduction in business process time and cost
  • Increased quality of products and services
  • Speed to market
  • Appropriate partner relationships between IT strategy and the executives

In order to align the organization's IT strategies with the executive, it is important to optimize the processes, define and manage the processes, continuously repeat the processes, initiate the processes and ensure that the IT strategy functions purely in support of the business executives (Tabach, 2013). In order to avoid business fallouts, it is mandatory for the IT strategists and business executives to work together. With both teams working together, the business will be in a better position to define and relate its value, focus on the complaints of customers, ensure that IT initiatives are consistently evaluated, and develop IT strategies which recognize executive initiatives, architecture, people development, organizational operations, business measurement and financial objectives.

The revenue growth projects, cost reduction projects and business enablement projects are the three portfolios which must be integrated by both the IT strategists and executives in order for the business to remain in operation and successful (Isaach, 2014). Therefore, for an IT organization to select the appropriate investment, each IT objective must be connected to a particular business objective selected by the business owner, who shall be responsible for evaluating business performance against that objective. For the business executives to establish the IT strategies which best fit the organization, it is important for them to analyze both the external and internal trends and pressures within which the organization operates.

In conclusion, the controversy between the business's IT strategies and the executives will be mitigated whenever the two parties answer these questions: where are we, why change, what could we do, what should we do, how do we get there, and did we get there. Through the establishment of a realistic and practical framework for reporting and measuring results, both the IT strategists and the executive governance shall be able to work together.

References

Tabach, A. (2013). Business and IT alignment, strategic/operating planning and portfolio investment management excellence. New York Publishers.

Isaach, N. (2014). IT governance – developing a successful governance strategy (3rd ed.). Oxford University Press.

Public Policies Being the Source of Troubles in the Society

Introduction

According to the article “The Making of Ferguson: Public Policies at the Root of Its Troubles” by Rothstein, I would modify the definition of racial segregation in the United States to mean individuals in society who have been denied their social rights through racial zoning, segregated public housing, restrictive covenants and the subsidization of suburban development for whites. I reinforce the view that most blacks in America remain poor because they are denied access to economic property by government policies that favor whites (Rothstein, 2015). In addition, they are disadvantaged in joining appropriate schools as a result of discrimination and segregation. As a result, they end up being unemployed and thus remain in a vicious circle of poverty.

Structural and cultural forces in the economy

According to Wilson, structural factors are more likely to influence black economic status than cultural factors. I tend to amend this because there are only a few families in the United States which end up being poor for cultural reasons such as attending poor schools (Mitchell-Dix, 2015). Nonetheless, structural factors such as government policies in support of white segregation, racial zoning, restrictive covenants and the subsidization of whites are some of the major factors which encourage whites to stigmatize blacks in society.

I tend to reinforce the belief that structural forces are the most striking forces affecting the underclass pattern of schooling, and hence the future level of unemployment and poverty among the underclass black community in the United States. Even in modern times, structural forces continue to influence public schools because they encourage them to maintain structural inequalities between white and black students (Wilson, 2009). I would modify the argument that school district administrators have the power to eliminate the inequalities existing in these schools, because I believe the government is responsible for coming up with the policies which eliminate cases of racism, oppression, segregation and discrimination against the children of underclass families in public schools.

The practice of offering sponsorships and scholarships only to bright underclass students should be removed from the argument. I tend to believe that in order for the relevant organizations, especially the schools, to eliminate poverty and increase employment for the underclass community, sponsorships should be offered to all students regardless of whether they have succeeded or not. I tend to reinforce the argument that school-based sponsorship and scholarship programs are among the appropriate ways of countering segregation of the underclass in the American economy (Mitchell-Dix, 2015). In addition, the government should be at the forefront of eliminating any racial zoning which might still exist, both the underclass and the working class should be given equal opportunities in acquiring property, and restrictive covenants should be eliminated from the economy.

Conclusion

The restrictive covenants, segregation in public housing acquisition policy, the subsidization of whites by the government and racial zoning policies, together with oppressive and discriminatory school policies between the underclass and whites, have contributed to blacks being poor in the economy.

 

References

Rothstein, R. (2015). The Making of Ferguson: Public Policies at the Root of Its Troubles. Economic Policy Institute resource.

Mitchell-Dix, S. (2015). “More Than Just Race: Being Black and Poor in the Inner City.”

Wilson, W. J. (2009). Toward a Framework for Understanding Forces that Contribute to or Reinforce Racial Inequality. New York: W.W. Norton.

Neuroscience Research

Introduction

The basic theory under examination, using functional magnetic resonance imaging, was that the fusiform gyrus of 12 out of 15 subjects was significantly more active when the subjects viewed faces than when they viewed assorted common objects (Kanwisher, et al. 2007). Through the data collected for the study, the researchers could reject alternative accounts of fusiform face area function that appeal to visual attention, general processing, or subordinate-level classification of any human or animate form, thereby demonstrating that this specific region may be selectively involved in face perception. This paper shall describe the theory under study, the experimental design and controls, the results of the study, and how the hypotheses were confirmed.

Methods

The experimental design involved a general approach in which occipitotemporal areas were examined for face-perception specialization by looking, within each subject, for regions in the ventral pathway which respond more strongly during passive viewing of photographs of faces than of assorted common objects. Through this general design, the face area within each specific subject was localized. The tests in Parts II and III of the study were based on the basic face-versus-object comparison of Part I, ensuring that the results of Part I could be used to generate the regions of interest compared in Parts II and III. The study involved twenty subjects under the age of forty (Kanwisher, et al. 2007). Of the 15 subjects included in the main analysis, 6 were men and 9 were women; 2 subjects were left-handed while 13 were right-handed.

The technique used in the study was based on stimulus presentation to the subjects. The stimuli were approximately 300 × 300 pixels in size and were grayscale photographs, apart from the scrambled and intact two-tone faces used in Part II. The face photographs used in Parts I and II were of volunteers from the Harvard Vision Sciences laboratory (Kanwisher, et al. 2007). Each subject's scan lasted five minutes and twenty seconds. The stimulus sequences were generated using the MacProbe software and were recorded onto videotape for projection. The scans were conducted on a 1.5 T MRI scanner.

Results

The first part of the study was to establish whether any brain areas were more active during face viewing than during object viewing. The results revealed that the right section of the fusiform gyrus produced a higher signal intensity during the face epochs than during the object epochs. The results are visible for a single subject whose data have been smoothed. The right fusiform gyrus was, for most subjects, the only region showing significant activation for faces over objects. In addition, fusiform activations for faces compared with objects were observed in 12 out of the 15 subjects analyzed (Kanwisher, et al. 2007). The failure to see face-activated regions in the remaining subjects could be attributed to susceptibility artifact, insufficient statistical power or technical limitations.

There is a significant difference between the control and experimental conditions because the identified region of each subject's fusiform gyrus responds more strongly to faces than to objects. Moreover, the subjects' fusiform gyrus responds more strongly to intact than to scrambled two-tone faces (Kanwisher, et al. 2007). The comparison between conditions also matters because greater visual attention is typically paid to faces than to objects, responses can differ for animate versus inanimate objects, and subordinate-level classification may still occur. There is a significant correlation between the critical variables in Parts II and III, but both differ from Part I of the experiment and its control condition.

Discussion

The original hypothesis of the study, that the face area would respond more strongly to intact than to scrambled two-tone faces, was confirmed when the fusiform gyrus region of twelve out of the 15 subjects was found to respond more strongly during passive viewing of the face than of the object stimuli. The study confirmed that the individually identified regions could be used as particular regions of interest within which additional tests of face selectivity were performed. One of these tests revealed that the face region of interest in each of the five subjects tested responded more strongly to intact two-tone faces than to scrambled versions of the same faces (Kanwisher, et al. 2007). This result ruled out luminance variation as an account of the face activation. The signal increase for faces in the region of interest is reported to be about six times greater than that for passive viewing of houses. The results revealed a high degree of stimulus selectivity: the face region of interest did not respond to other stimuli of the same general category, it did not respond to images of human bodies and body parts, and the response generalized to face images whose low-level visual features differed from those of the original face photographs.

Evaluation

The experiments were well controlled because they were organized into three parts: Parts I, II and III. The experiments addressed the hypothesis of the study because they evaluated how the fusiform gyrus region responded to the faces presented to the subjects. There is room for alternative explanation of the results because, through the study, we came to realize that full front-view photographs elicit stronger responses than scrambled two-tone faces in a different set of five subjects (Kanwisher, et al. 2007). A follow-up study would be to establish whether the fusiform gyrus region can be activated by inducing global or holistic processing of non-face stimuli, and whether the region's response to faces is attenuated when subjects are persuaded to process the faces in a part-based fashion.

Prepared Evaluation Plan for Curriculum

Background

Theoretical nursing is made up of statements which guide nursing practice, provide direction on how nursing research must be conducted and propose answers to the nursing questions which might be generated in practice. Theoretical nursing therefore involves the use of existing nursing theories as well as the testing and development of nursing theories. In the nursing discipline, theory must be the driving force across all levels of the nursing curriculum (Utley, 2010). It is of great importance for nursing theory to guide nursing practice and nursing research if the discipline is to realize its full potential. In nursing, research, practice and theory are seen as cyclical: practice suggests hypotheses, asks questions, generates researchable problems and provides the conceptual grounding that guides theory (Bert, 2010). As a result, nursing theory begins and ends with practice. Nursing theories exist to challenge existing practice, remodel the structures of principles and rules, and create new strategies for practice. Research, theory and nursing practice therefore work together to define nursing science and, more so, to position nursing as a profession. To accomplish this, theory must be integrated into all educational programs. The purpose of this work is to demonstrate how nursing theory may be integrated throughout nursing education programs.

The formative and summative measures which might be used at the course level

Formative and summative assessment at the associate degree nursing level is based on viewing research and theory in terms of the meta-paradigm. The nursing meta-paradigm is based on nursing, health, environment and the person, and it is used as the framework for organizing the associate degree nursing curricula (Grooschner, 2012). In courses at this level, nursing students are introduced to the meta-paradigm to enable them to recognize their nursing role in society and to apply the nursing process. Nursing at this level is therefore guided, implicitly and explicitly, by theory. Formative assessment measures at the associate degree level may include: identification of theory-based nursing practice, use of the meta-paradigm in the nursing process, and use of the meta-paradigm to view the role of nursing and nurses in society (Grooschner, 2012). Summative assessment measures at this level may include: using research findings in connection with the nursing profession, using the meta-paradigm in nursing field data collection, and using the meta-paradigm to identify researchable nursing problems.

At the baccalaureate nursing level, the practice-theory-research cycle is used to articulate ideas. Courses at this level use theoretical thinking to conceptualize the components of the cycle and identify the goal of research in terms of theory testing and theory building (Galluzzo, 2012). Formative assessment measures at this level may include: viewing nursing practice as theory based, understanding how different nursing models view the meta-paradigm, and applying theoretical thinking in the nursing process. Summative assessment measures at this level include: identifying researchable clinical problems, participating in data collection and the implementation of findings, using the practice-theory-research cycle to assist in the selection of data collection methods, and assisting researchers in gaining access to clinical sites. Formative assessment measures for master's prepared nursing include: basing nursing practice on nursing theory, critiquing and analyzing nursing theories and models, and providing leadership for the use of nursing theory in practice. Summative assessment measures for master's prepared nursing include: collaboration on the research plan, appraisal of relevant clinical findings, creation of a supportive research climate, and provision of leadership for integrating findings into practice.

Formative assessment measures for doctorate prepared nursing are based on nursing theories being used to design studies, generate new nursing theories and test evolving nursing theories. Summatively, the practice-theory-research cycle enables researchers to conduct research which aims at generating new theories while at the same time testing them.

The evaluation measures to be utilized at the faculty level

There are different measures which shall be utilized at the faculty level to evaluate nursing practice. At the associate degree nursing faculty level, the meta-paradigm themes and concepts shall be used to evaluate the appropriate intervention measures which might be adopted for common health deviations (Osorio & Visscher, 2005). Understanding the different meta-paradigms in terms of the people involved, the nursing environment, the health sector and nursing practice in general shall ensure that theorists and researchers apply their knowledge and principles to handle the complex health conditions which might come their way. The baccalaureate nursing level shall introduce the perspectives of specific nursing theorists to address aspects related to disease prevention and health promotion for groups and individuals. Once the research meta-paradigms are identified, this level ensures that the themes and concepts are described according to the expectations of each theorist or researcher. Since each meta-paradigm group has its specific needs and expectations, it is the role of the researchers to design specific intervention measures for each group.

At the master's prepared nursing level, models shall be used to evaluate theories in practice. Researchers and theorists shall endeavor to use an appropriate theoretical base to evaluate the nursing interventions required to meet the complex health needs of groups and individuals in society and in the nursing environment (Horen, et al. 2015). The master's level shall use nursing theory as the guideline for critiquing and analyzing appropriate intervention measures when dealing with complex nursing situations. At the post-doctorate prepared nursing level, the different theory paradigms of science shall be used to compare and contrast approaches to theory development. In addition, the process of theory building and nursing epistemology shall be used to describe step-by-step theory building as well as to develop nursing theories based on the research which has been conducted (Jamaeson, 2007). Researchers and theorists shall use these theory development approaches to evaluate the effectiveness and efficiency of building new nursing theories while at the same time sustaining the theoretical nursing paradigm cycle.

The formative and summative evaluation measures likely to be used at program level 4 (Post Doctorate Prepared Nursing)

Formative assessment measures at the Post Doctorate education level:

  • Just like any other level of nursing, this level shall endeavor to generate new nursing theories which have never previously been in practice, through new research and discoveries.
  • In order to establish whether the nursing theories and research are practicable in the real nursing field, the theorists and researchers shall ensure that they evaluate and test the theories' effectiveness in the nursing environment.
  • At this level, the scientists and nursing theorists shall endeavor to develop systematic programs of research.
  • They shall coordinate the funded research programs in nursing.

Summative assessment measures at the Post Doctorate Prepared Nursing level:

  • The nursing theorists and researchers shall establish a sustainable membership that will set the tone for theory-based practice.
  • The theorists and researchers at the post-doctorate level shall endeavor to adapt existing theories discovered previously to design new studies for further research in nursing.
  • Through the discovery of new theories and research, the researchers and learners shall be in a better position to generate new knowledge in the nursing field.
  • This level will enable researchers, learners and theorists to ensure that the practice-theory-research cycle remains in practice.
  • Postdoctoral study shall assist nursing developers and researchers in developing systematic nursing programs and research which significantly contribute to the development of nursing science.
  • The theorists and researchers shall bring together funded research to refine or add to theoretical knowledge in the unique nursing field.
  • The formation of a scientific nursing community shall serve to mentor new nursing researchers.