Posted: August 21st, 2022
INTRODUCTION
The internet has drastically changed the way we work, spend our leisure time and communicate with one another. The number of Internet users has increased dramatically over the last two and a half decades and is now estimated at 3.2 billion worldwide [1]. In the early days of the World Wide Web (WWW), users accessed static web pages and documents, typically written in Hypertext Markup Language (HTML). Today, however, the internet supports all kinds of dynamic media such as audio, video and mashups.
Technological advancements in computing and the growth of the Internet have generated tremendous amounts of data and media. Video in particular has become an important component of Internet user content and is now the most common way for users to share their thoughts and expressions. Streaming of media such as video has grown rapidly over the past few years and is expected to become even more significant as internet and broadband technologies continue to improve. Today, streaming media is transitioning to Over-The-Top (OTT) streaming. OTT streaming refers to the delivery of audio, video and other media over the Internet, directly from the media generation infrastructure to the client, without involving satellite-based direct-broadcast television systems or multiple-system cable operators.
Live streaming technology is now widely used to broadcast events such as sports, concerts and campaigns. OTT-based live streaming allows users to view events in real time and to participate in them virtually, using a wide variety of devices such as mobile phones, tablets, laptops and personal computers to access the live streams.
Determining and predicting the potability of ground water samples has vast scope in today's market. It can be used by the government to keep track of viable ground water sources in the state [1], and by pollution control boards to monitor pollution levels in bore wells whose water is not fit for consumption. Hence the scope of this project is virtually limitless. Ground water is ever changing and its quality can never be predicted exactly; most environmental phenomena are cyclic in nature. Conventional methods for prediction and forecasting of water quality include grey system theory, neural networks and regression. In this work, Linear Regression and Naïve Bayes classification are used to predict and forecast water quality and potability [2].
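As a hedged illustration of the regression side of this approach, the sketch below fits a least-squares line to a short series of hypothetical readings of a single water-quality parameter and extrapolates it one year ahead. The parameter, years and values are invented for illustration and are not drawn from the project's data.

```python
# Illustrative sketch only: least-squares linear regression used to
# forecast one water-quality parameter a year ahead. The years and
# nitrate values below are hypothetical, not project data.

def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares regression line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

years = [2015, 2016, 2017, 2018, 2019]
nitrate = [32.0, 34.5, 36.0, 39.0, 41.5]   # hypothetical mg/L readings

slope, intercept = fit_linear(years, nitrate)
forecast_2020 = slope * 2020 + intercept
print(round(forecast_2020, 2))   # extrapolated nitrate level for 2020
```

A real system would fit one such model per parameter (pH, hardness, nitrate, and so on) and feed the forecasts to the classifier.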
A geographic information system (GIS) is a system designed to capture, store, manipulate, analyze, manage, and present all types of spatial or geographical data. The acronym GIS is sometimes used for geographical information science or geospatial information studies, referring to the academic discipline or career of working with geographic information systems, a large domain within the broader discipline of Geoinformatics. What goes beyond a GIS is a spatial data infrastructure, a concept with no such restrictive boundaries. In a general sense, the term describes any information system that integrates, stores, edits, analyzes, shares and displays geographic information. GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data in maps, and present the results of all these operations. Geographic information science is the science underlying geographic concepts, applications, and systems.
In this section, the prediction and forecasting of ground water quality, and the challenges in predicting and forecasting ground water in wells, are discussed.
Ground water is ever changing and its quality can never be predicted exactly. Most environmental phenomena are cyclic in nature, but these cycles usually have long periods and are hard to recognize, so determining the potability of ground water is a very difficult task. Predicting water quality is nevertheless a significant problem, as accurate prediction increases economic efficiency. Water quality prediction is complicated by the complexity and diversity of the data, which affect forecasting accuracy; at the same time, models that predict accurately from monitoring data are very hard to construct.
Balakrishnan, Saleem and Mallikarjun [3] studied the spatial variations in ground water quality in the corporation area of Gulbarga City, located in the northern part of Karnataka State, India, using geographic information system (GIS) techniques.
Najafabadi [4] made use of WATMAPGIS (creation of groundwater maps using a Geographical Information System), software developed to create groundwater observation maps within the ArcInfo GIS environment. Groundwater monitoring results can be mapped and queried rapidly, so monthly fluctuations of groundwater can be displayed quickly.
Fangjie Cong and Yan Fang Dieo [5] developed a mixed Client/Server (C/S) and Browser/Server (B/S) structure, with the system separated into eight modules: groundwater exploitation management, groundwater permission management, groundwater record management, groundwater level monitor management, groundwater quality monitor management, groundwater payment management, official document management and information publishing management. Two of these modules, ground water level monitoring and quality monitoring, are useful for representing parameter values and the ground water level in spatial form.
Hong Mei and Zhang [6] calculated a relative risk map of groundwater pollution, estimated through a procedure that identifies, cell by cell, the values of three factors: inherent vulnerability, load risk of the pollution source, and contamination hazard. By combining an early-warning model of groundwater pollution with Component Object Model Geographic Information System (COMGIS) technology, a regional groundwater pollution early-warning information system was developed and applied in Qiqihar, China.
Zhenghua Yang and Yan [7] combined the Ordinary Kriging interpolation method with geographic information system techniques to analyze the dynamic changes of groundwater depth and groundwater level in Minqin Oasis in the Januarys between 1998 and 2010. The results show that: (1) the groundwater depth is greater in the south and less in the north; it has declined year by year, with an annual mean decline rate of 0.55 m/a; (2) the groundwater level is higher in the southwest and lower in the northeast, and it has been continuously decreasing year by year.
Mei, Song and Lijie [8] worked on the theory of GIS for the evaluation and management of groundwater, and a network of hydro-geological information systems was developed. A model base management subsystem is designed, which includes a parameter calculation model, a mining prediction model, and a groundwater resources management model. The groundwater resources management model is built and applied in practice; it offers support for optimizing management and rational use of water resources.
Sonam Pal and Rakesh Sharma [9] proposed an intelligent system to support decision making for the establishment of new resources in any geographical area. Using GIS, the full information about a geographical area can be obtained. GIS and Artificial Intelligence (AI) techniques are used here with a spatial database.
K. A. Mogaji, H. S. Lim and K. Abdullah [10] developed a groundwater vulnerability prediction model based on a geographic information system-based ordered weighted average (OWA)-DRASTIC approach, investigated in the southern part of Perak, Malaysia. The results confirmed that the methodology holds significant potential to support the complex decision making involved in mapping groundwater pollution potential in the area.
Efren L. Linan, Victor B. Ella and Leonardo M. Florence [11] conducted a study to assess the vulnerability of groundwater resources to contamination by applying the DRASTIC model in combination with Quantum Geographic Information System (QGIS) in Boracay Island, Aklan, Philippines. The study showed that the combined use of the DRASTIC model and Quantum GIS is an effective method for groundwater contamination vulnerability assessment.
Muzafar N. Teli and Nisar A. Kuchhay [12] studied water samples collected from 92 locations. The samples were analyzed for physico-chemical parameters such as pH, total hardness, Ca, Mg, Fe, F, SO4, NO3, K, Cl and Na, using standard laboratory techniques, by the Department of Public Health Engineering, Srinagar (PHE) and the Central Groundwater Board Jammu (CGWB), and were compared with WHO standards. The results obtained and the spatial database established in GIS will be helpful for monitoring and managing ground water pollution in the study area.
Valmik and Meshram [13] proposed a Bayesian model for prediction of the water level, since a Bayesian prediction model has the ability to learn new classes, and its accuracy grows as the training data increases. The drawback of this approach is that if a predictor category is absent from the training data, the Bayesian algorithm assigns zero probability to any new record with that category. According to this paper, the Bayesian model provides good accuracy for water level prediction. The parameters considered are station level pressure, mean sea level pressure, temperature, and rainfall; parameters that are less relevant features in the dataset are ignored in the model computation.
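The zero-probability issue described above can be sketched concretely: a categorical Naive Bayes likelihood estimated from raw counts assigns probability 0 to any category never seen in training, and add-one (Laplace) smoothing avoids this. The rainfall categories and counts below are hypothetical.

```python
# Sketch of the zero-probability problem in a count-based Naive Bayes
# likelihood, and how Laplace (add-one) smoothing fixes it.
# The feature categories and counts are hypothetical.

def likelihood(counts, category, k_categories, alpha=0):
    """P(category | class) from counts; alpha=1 gives Laplace smoothing."""
    total = sum(counts.values())
    return (counts.get(category, 0) + alpha) / (total + alpha * k_categories)

# counts of the "rainfall" feature among training records of one class;
# the "high" category never appears in the training data
counts = {"low": 6, "medium": 4}
K = 3                                  # low / medium / high

p_unsmoothed = likelihood(counts, "high", K)          # 0.0 kills the product
p_smoothed = likelihood(counts, "high", K, alpha=1)   # small but non-zero
print(p_unsmoothed, round(p_smoothed, 3))
```

With smoothing, an unseen category contributes a small non-zero factor instead of forcing the whole class posterior to zero.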
Kannan, Prabhakaran and Ramachandran [14] computed the Pearson coefficient for five years of data and then compared it with data predicted using a regression approach. Here, water level and rainfall are predicted using multiple linear regression. The predicted values lay within roughly 80% of the computed values; the results are thus not highly accurate, but approximate the computed model.
Changjun and Qinghua [15] employed a grey clustering model to measure water quality. The evaluation was based on an assessment of the water quality of twenty divisions of the Suzhou River and was compared with that of a conventional framework; the proposed grey clustering model proved practically feasible, and its application is quite simple. The proposed algorithm, which overcomes the drawbacks of single-factor methods, can assess the present quality of the water.
James, Bavy and Tharam [16] employed an Improved Naive Bayes Classifier (INBC) technique that uses genetic algorithms (GAs) to select a subset of input features in classification problems. Two schemes were established: scheme I considers all basic input variables for water quality prediction, while scheme II considers an optimally chosen subset of input variables. According to the results, INBC achieved a 90% accuracy rate on the potable/not-potable classification problem.
Vanessa L. Lougheed et al. [17] conducted a study to examine water quality based on the relationships among different parameters such as chlorine, fluorine, chlorophyll, nitrate and pH. Linear regression was used to determine water quality, and the Pearson correlation coefficient was used to derive the relationships. The results showed that the relationships obtained among the parameters were accurate, but the linear regression results were only approximate and not highly accurate.
Wang and Hui-hua Sheng [18] proposed a General Regression Neural Network (GRNN) algorithm for annual water quality prediction in Zhengzhou. The proposed model fits and predicts better: the accuracy of the GRNN model exceeds that of BP, and the stepwise regression algorithm is inferior to both the BP and GRNN models in the accuracy of its simulation and prediction results. The drawback of this method is that it cannot be efficiently applied to multivariate analysis.
Wen-jei Wu and Yan Xu [19] adopted Pearson's correlation coefficient, which reflects the correlative information between two variables, to analyze the correlation between the functions of the parameters. The result is that the related variables show positive correlations with the main functions of the parameters, though with differences in the degree of correlation.
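Pearson's correlation coefficient, as used in [19], is the covariance of two variables normalized by the product of their standard deviations. The short sketch below computes it for two hypothetical parameter series; the values are illustrative, not measured data.

```python
# Pearson's correlation coefficient between two parameter series.
# The hardness and TDS values below are hypothetical.
import math

def pearson(xs, ys):
    """r = cov(x, y) / (std(x) * std(y)), in the range [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hardness = [120, 150, 170, 200, 230]   # hypothetical mg/L
tds      = [300, 340, 390, 450, 500]   # hypothetical mg/L

r = pearson(hardness, tds)
print(round(r, 3))   # close to +1: strongly positively correlated
```

A value near +1, as here, indicates the two parameters rise and fall together, which is the kind of interdependence the coefficient is meant to expose.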
Sinin, Nannan et al. [20] proposed back propagation (BP) and radial basis function (RBF) networks for evaluating the water quality of the Fuyang River in Handan City. The evaluated water quality grade is III. RBF is superior to the back propagation neural network in the training process, as back propagation networks can fall into local minima; an RBF with a Gaussian function is used for the evaluation. Traditional methods usually require the connection weights to be determined manually, which seriously impacts the reliability of the evaluation models.
Dingsheng and Yufeng [21] employed an annual average water quality prediction model based on multiple linear regression combined with a stepwise discriminant framework, and applied a Bayesian statistical framework to improve the accuracy of the predicted model, although overall performance can still be improved further. The proposed method provides satisfactory experimental results in terms of prediction accuracy.
Scott, Amy and Ming [22] developed a technique using continuous Bayesian networks for prediction of water quality potability. The network trains on a subset of the data as well as a subset of the attributes in the data. The model is generated for continuous domains and is applied to discover significant parameters in the dataset and the relationships among them. Linear Gaussian distributions are applied between the ensembles, allowing efficient network-level inference; with this approach they are able to represent nonlinear relationships.
Ramakrishnaiah and Sadashivaiah [23] developed a model aimed at assessing the water quality index (WQI) for the groundwater of Tumkur taluk. This was determined by collecting groundwater samples and subjecting them to comprehensive physicochemical analysis. For calculating the WQI, the following 12 parameters were considered: pH, total hardness, calcium, magnesium, bicarbonate, chloride, nitrate, sulphate, total dissolved solids, iron, manganese and fluorides. The WQI for these samples ranges from 89.21 to 660.56. The results of the analyses were used to suggest models for predicting water quality. The analysis reveals that the groundwater of the area needs some degree of treatment before consumption, and that it also needs to be protected from the perils of contamination.
Ruiz-Luna and Berlanga-Robles [24] developed a model which showed, using linear regression, that alongside salts, changes in mangroves and agriculture also affect the water quality in the study area. A thematic map for each sub-image was produced using supervised classification with the Extraction and Classification of Homogeneous Objects (ECHO) algorithm. Aerial photography and field verification of test points were used to validate the classification and to assess its accuracy using the Kappa coefficient.
Qiang Zhang, Chong-yu Xu and Jiang [25] concluded that the annual maximum stream flow and annual maximum water level, and their variations, exert the most serious influences on human society. In their paper, temporal trends and frequency changes at three major stations of the Yangtze River, i.e. Yichang, Hankou and Datong, representing the upper, middle and lower reaches respectively, were detected using the parametric t-test, Mann–Kendall (MK) analysis, linear regression and wavelet transform methods. Linear regression was developed to predict the water level and maximum stream flow. The results showed that stream flow affects the water levels in the river under study, making water level prediction difficult.
Wengao and Nakai [26] developed two models for the prediction of water quality: artificial neural networks and linear regression. The study finds the linear regression method more accurate, with root mean square values of prediction of 0.144 and 0.949, 0.232 and 0.868, and 0.234 and 0.815 for the parameters considered. The superiority of the linear regression approach is due to its high prediction accuracy and its ability to compute water quality from multiple parameters.
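The root mean square error (RMSE) quoted above is a standard accuracy measure: the square root of the mean squared difference between observed and predicted values. The sketch below computes it for a hypothetical series of pH readings; the numbers are illustrative only.

```python
# Root mean square error (RMSE) between observed and predicted values.
# The pH readings below are hypothetical.
import math

def rmse(observed, predicted):
    """sqrt of the mean squared prediction error."""
    return math.sqrt(
        sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    )

observed  = [7.1, 7.4, 6.9, 7.8, 7.2]   # hypothetical pH readings
predicted = [7.0, 7.5, 7.0, 7.6, 7.3]   # hypothetical model output

print(round(rmse(observed, predicted), 3))
```

A smaller RMSE means the model's forecasts sit closer to the measured values, which is why [26] uses it to rank the two models.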
Shoba and Shobha [27] surveyed data mining techniques. Data mining methods may be classified by the function they perform or by their class of applications; on this basis, four major categories of processing algorithms and rule approaches emerge: 1) classification, 2) association, 3) sequence and 4) clustering. The paper explores various data mining techniques, such as artificial neural networks, back propagation, MLP, GRNN and decision trees, used in the prediction of water quality, and focuses on how data mining techniques are used for water quality prediction.
The methods above illustrate the various machine learning techniques used for forecasting ground water quality, potability and ground water levels. These techniques did not consider the interdependencies among the parameters. For the proposed system, therefore, a multivariate model is developed using linear regression improved with the Pearson coefficient for forecasting the water quality parameters and water levels for the upcoming year, and the Naïve Bayes classifier is used for classifying water samples.
Groundwater plays a vital part in the economic system, especially for the farmers of Chikballapur district. Chikballapur is also called the land of silk and milk. Agriculture here is based on irrigation from tanks constructed earlier, but due to drought, farmers depend on bore wells for their irrigation needs. The water in this area is contaminated with many impurities, making it unfit for domestic use. Hence, the motivation behind this project was partly to assist the people of the district and partly an interest in applying our knowledge of Computer Science to a social cause that would help in the betterment of society.
Media streaming architectures typically suffer high end-to-end delay. This delay can be extremely undesirable to viewers when it comes to live streaming of events such as sports. The end-to-end delay of OTT streaming should be minimal compared to direct-broadcast satellite streaming.
The end-to-end delay in media streaming architectures is typically due to three hops. The first hop is the path between the encoder at the customer premises and the entry point of the media streaming architecture. The middle hop is the path the media stream travels through the different components of the architecture. The last hop is the path from the delivery servers to the client device. To reduce the latency that arises from these three hops, we focus on accelerating the media streams in each of them; first-mile, middle-mile and last-mile acceleration correspond to the first, middle and last hops respectively. Middle-mile latency can be reduced by techniques such as content prefetching, caching and compression, and can be reduced drastically by enhancing the protocols used in the middle mile. Last-mile latency is becoming significantly less of a concern as many users adopt high-speed broadband connections.
The first-mile latency, in the path between the origin infrastructure and the entry point of the media streaming architecture, contributes significantly to the end-to-end latency. It is due to the limiting bottlenecks of the underlying protocols used, such as the Transmission Control Protocol (TCP).
The primary objective of this project is to improve the first-mile acceleration in media streaming. Some of the other objectives are:
The system aims to determine the potability of ground water samples, which has vast scope in today's market. The potability classification results can also be used to take the actions necessary to improve water quality. Further, this model can serve as a base model to which extensive optimization approaches can be applied for better accuracy in the near future. The system can be used by the government to keep track of the various viable groundwater sources in the state, and by pollution control boards to monitor pollution levels in bore wells containing water that is not fit for consumption. Hence the potential usage of this project seems limitless.
The existing TCP-based transport in the first mile is replaced by a UDP-based transport protocol implemented by Google called Quick UDP Internet Connections (QUIC), which reduces latency compared to TCP. The media streams captured using hardware devices (camera, microphone, etc.) are encoded using an encoder, which compresses the streams, encodes them into formats and packages them into containers. The encoder simulator used here is FFmpeg. The streams are then passed into a GStreamer pipeline. A QUIC server running on the encoder system transports the streams from the pipeline to the ingest server via a plugin that connects the GStreamer pipeline to the QUIC server.
A QUIC client running on the ingest server, which is the entry point to the media streaming architecture, accepts the streams from the server. The streams are transported from the encoder to the ingest server using the QUIC protocol in the form of QUIC packets. At the ingest server, the streams are forwarded to the mid-tier architecture for delivery.
This section gives a broad picture of the various chapters in the report.
This chapter deals with the introduction to the machine learning techniques and the research gap. It also discusses the motivation, problem statement and objectives.
FUNDAMENTAL CONCEPTS
Over the last few years, many attempts have been made to make the Internet faster. Internet Service Providers (ISPs) try to decrease web page load times by using efficient, fast networking devices and by increasing bandwidth. Web browser developers try to make browsers faster and more efficient to speed up Internet surfing.
Transmission Control Protocol (TCP) has been the backbone of the Internet for many years. Today, TCP and User Datagram Protocol (UDP) are the most used transport layer protocols. TCP is a connection-oriented protocol, whereas UDP is connectionless; each has its own advantages and disadvantages. TCP provides a reliable connection established via a three-way handshake, while UDP provides an unreliable, “send-and-forget” method of transmitting packets. Although TCP's three-way handshake yields a reliable connection, it increases the latency of connection establishment, which is a problem when connections are short-lived rather than persistent.
Another problem with TCP is that only a single HTTP/1.1 request/response can be carried in a TCP segment. In a single HTTP/1.1 session it is also possible to send a large number of small segments, which can result in large overhead.
Moreover, since clients always initiate HTTP/1.1 transfers, the performance of HTTP/1.1 decreases significantly when embedded files are loaded, because the server has to wait for a request from the client, even if the server knows that a specific resource is needed by the client.
Head-of-line (HOL) blocking is another problem found in TCP. HOL blocking occurs when packets arrive at the destination after a packet that should have arrived before them has been lost. The receiving host has to wait for the retransmission of the lost packet before it can start processing the packets that arrived after it. In media streaming applications, the loss of a small number of packets does not significantly affect the user experience, yet the receiver must still wait for the lost packet before it can resume playing the video. This problem can be mitigated by opening multiple TCP connections between the same endpoints, which works well for a small number of connections; but when many connections are opened simultaneously, their congestion windows tend to sway between very small and very large values, reducing throughput and consequently degrading the user experience.
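The blocking behaviour can be sketched with a toy in-order receiver: segments that arrive after a loss sit in a buffer until the missing segment is retransmitted. The segment numbers and arrival order below are illustrative.

```python
# Toy model of head-of-line blocking: an in-order (TCP-like) receiver
# cannot deliver segments 3 and 4 until lost segment 2 finally arrives,
# even though 3 and 4 arrived earlier.

def delivery_steps(arrivals):
    """Map each sequence number to the arrival step at which an
    in-order receiver can actually deliver it to the application."""
    steps, buffered, expected = {}, set(), 1
    for step, seq in enumerate(arrivals, 1):
        buffered.add(seq)
        while expected in buffered:      # drain once the gap is filled
            steps[expected] = step
            expected += 1
    return steps

# segment 2 is lost at first and only arrives (retransmitted) last
steps = delivery_steps([1, 3, 4, 2])
print(steps)   # segments 3 and 4 are held back until step 4
```

Segments 3 and 4 arrive at steps 2 and 3 but are only delivered at step 4, after the retransmitted segment 2: that stall is exactly the HOL blocking the text describes.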
QUIC (Quick UDP Internet Connections) is an experimental protocol developed by Google. QUIC aims to combine the advantages of TCP and UDP into a single unified protocol. The protocol supports a set of multiplexed connections over UDP, and was designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency. An experimental implementation has been put in place in Chrome by a team of engineers at Google.
QUIC runs over UDP, and many of its mechanisms are inspired by TCP. Like TCP, QUIC uses acknowledgements to tell the sender that segments have arrived at the destination. Its loss recovery and congestion control mechanism is a reimplementation of TCP CUBIC with additional enhancements [2]; the CUBIC mechanism is tuned for high-bandwidth, high-latency networks [3]. QUIC also uses a retransmission timer: every segment that is not acknowledged within the timer period is considered lost.
QUIC uses a fast retransmit mechanism to avoid retransmission timeouts. This is triggered when the sender receives three or more duplicate acknowledgements (ACKs). A duplicate ACK is an acknowledgement for a segment that has been acknowledged before, and it indicates a packet loss. When a retransmission timeout occurs, the congestion window is reset to the maximum segment size; the fast retransmit mechanism instead sets the congestion window to a value dependent on its value before the loss. The connection then stays in the congestion avoidance phase rather than starting a new slow start, as it would after a retransmission timeout.
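The duplicate-ACK trigger can be sketched as a simple counter on the sender side; the ACK sequence below is illustrative, and real implementations combine this signal with timers and window adjustments.

```python
# Sketch of the fast-retransmit trigger: three duplicate ACKs for the
# same segment are treated as a loss signal, prompting retransmission
# without waiting for the retransmission timer to expire.

DUP_ACK_THRESHOLD = 3

def fast_retransmit_events(acks):
    """Return the segment numbers retransmitted after 3 duplicate ACKs."""
    dup_count, last_ack, retransmitted = 0, None, []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                retransmitted.append(ack)  # resend the missing segment
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# the receiver keeps ACKing segment 2 because segment 2's data is missing
events = fast_retransmit_events([1, 2, 2, 2, 2, 5])
print(events)
```

Here the first ACK of 2 is the original and the next three are duplicates, so the sender retransmits segment 2 on the third duplicate, well before any timeout would fire.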
The congestion control mechanism in QUIC is inspired by TCP and uses many mechanisms that TCP uses. Moreover, QUIC also has new mechanisms of congestion and flow control that are exclusive to this protocol.
Connections between the sender and receiver in QUIC are established in 0-RTT, or at most 1-RTT. When a QUIC client connects to a server it has never connected to before, the client sends an empty inchoate packet called a client hello (CHLO). On receiving this packet, the server responds with a rejection packet (REJ) containing the server configuration and the Secure Sockets Layer (SSL) certificates. The client uses this information to send another CHLO packet, which is accepted by the server; from then on, all data sent is authenticated and encrypted. When the server is already known to the client, the inchoate CHLO packet is unnecessary, resulting in a 0-RTT handshake; when the server is unknown, the inchoate CHLO must first be sent, resulting in a 1-RTT handshake.
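The round-trip cost of this handshake can be sketched as a small state machine: an unknown server costs one round trip (inchoate CHLO, REJ, then full CHLO), while a server whose configuration is cached allows 0-RTT. The server name used is illustrative.

```python
# Sketch of QUIC handshake round trips: 1-RTT for an unknown server,
# 0-RTT once the server's configuration has been cached.

def handshake_rtts(known_servers, server):
    """Round trips spent before application data can be sent."""
    if server in known_servers:
        return 0   # cached config: full CHLO can carry data immediately
    # inchoate CHLO -> REJ (server config + certificates) -> full CHLO
    known_servers.add(server)
    return 1

known = set()
first = handshake_rtts(known, "example.com")    # first contact: unknown
repeat = handshake_rtts(known, "example.com")   # now known: 0-RTT
print(first, repeat)
```

Compare this with TCP plus TLS, where connection establishment alone costs multiple round trips on every fresh connection.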
QUIC uses a 64-bit connection identifier selected randomly by the client. Multiple streams are used within each connection to transport segments. This allows clients to keep a connection mobile across different IP addresses and UDP ports. Many ports can be used by an application, but the application has to listen on each of them. QUIC also provides connection migration across endpoints: a QUIC connection can remain established even when the IP address of one or both endpoints changes.
Since the QUIC protocol supports multiple streams within a single connection, it addresses the issue of head-of-line (HOL) blocking by sending independent data via separate streams. Every stream within a connection is identified by a stream identifier (stream ID). Collisions between server and client over stream IDs are avoided by using an even stream ID when the client initiates the stream and an odd stream ID when the server initiates it. Each participant increases the stream ID monotonically for every new stream that is created.
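The even/odd parity rule can be illustrated with a tiny allocator. The class and method names below are invented for this sketch and follow the convention stated above (client even, server odd).

```java
// Illustrative stream-ID allocator: client-initiated streams get even IDs,
// server-initiated streams get odd IDs, each increasing monotonically.
class StreamIdAllocator {
    private long nextId;

    StreamIdAllocator(boolean isClient) {
        this.nextId = isClient ? 0 : 1; // even base for the client, odd for the server
    }

    /** Each new stream gets a monotonically increasing ID with the initiator's parity. */
    long openStream() {
        long id = nextId;
        nextId += 2; // step by two to preserve the parity
        return id;
    }
}
```

Because the two sides draw from disjoint ID spaces, they can open streams concurrently without coordination.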
In TCP, the same sequence numbers are used for original segments and their retransmissions. This causes a problem because the host cannot differentiate between an original segment and a retransmitted one. To solve this problem, QUIC uses sequence numbers that always increase for every segment, so even retransmitted segments have unique sequence numbers.
QUIC therefore has better ACK signaling than TCP. Because QUIC assigns unique sequence numbers to original and retransmitted segments, a receiver can easily distinguish an original packet from its retransmitted equivalent. The sender can thus calculate the RTT with greater accuracy, as the difference between a delayed acknowledgement and the acknowledgement of a retransmitted packet is easily recognized.
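The benefit of always-increasing numbers for RTT measurement can be sketched as follows. The identifiers and the millisecond clock are assumptions made for illustration; the point is that a retransmission gets a fresh number, so an ACK unambiguously names one transmission.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of RTT sampling with unique, always-increasing packet numbers.
class RttSampler {
    private final Map<Long, Long> sendTimesMs = new HashMap<>();
    private long nextPacketNumber = 0;

    /** Records a send (original or retransmission); each gets a fresh number. */
    long onSend(long nowMs) {
        long pn = nextPacketNumber++;
        sendTimesMs.put(pn, nowMs);
        return pn;
    }

    /** Returns the RTT sample for exactly the transmission the ACK names, or -1 if unknown. */
    long onAck(long packetNumber, long nowMs) {
        Long sent = sendTimesMs.remove(packetNumber);
        return sent == null ? -1 : nowMs - sent;
    }
}
```

With TCP's shared sequence numbers, an ACK arriving after a retransmission could refer to either transmission; here the lookup by unique number removes that ambiguity.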
TCP uses selective acknowledgements (SACKs) to inform the sender which packets were received so that the sender can retransmit the lost segments. The QUIC protocol uses negative acknowledgements (NACKs) instead of SACKs, with a larger range of up to 255 ranges instead of 3. Rather than implying unacknowledged packets by withholding ACKs, QUIC uses NACKs to report lost packets directly. Compared to the SACKs that TCP uses, QUIC's NACKs are advantageous because of this larger range.
QUIC uses forward error correction (FEC) and sends additional redundancy data along with a group of packets, so that when a packet is lost, this additional data can be used to reconstruct it. The technique uses XOR operations to reconstruct the lost packet, and it is simple and effective. However, it does not work when multiple packets are lost within a group; in that case the packets cannot be reconstructed.
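The XOR-based reconstruction described above can be shown concretely. This is a minimal sketch of the idea only (packet framing and group signaling are omitted, and names are illustrative): the parity packet is the XOR of all data packets in a group, so XOR-ing the parity with the survivors rebuilds a single lost packet.

```java
// Minimal XOR parity sketch of a one-loss-per-group FEC scheme.
class XorFec {
    /** XOR parity of all packets in the group (all assumed equal length). */
    static byte[] parity(byte[][] packets) {
        byte[] p = new byte[packets[0].length];
        for (byte[] pkt : packets)
            for (int i = 0; i < p.length; i++)
                p[i] ^= pkt[i];
        return p;
    }

    /** Rebuilds the single missing packet from the parity and the surviving packets. */
    static byte[] recover(byte[] parity, byte[][] survivors) {
        byte[] lost = parity.clone();
        for (byte[] pkt : survivors)
            for (int i = 0; i < lost.length; i++)
                lost[i] ^= pkt[i];
        return lost; // yields garbage if more than one packet of the group was lost
    }
}
```

The last comment restates the limitation from the text: with two or more losses in one group, the single XOR equation is underdetermined and recovery fails.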
Packet pacing is a technique used in QUIC that sends packets at a rate below the full rate to make the transmission less bursty. This technique is particularly useful and effective under low-bandwidth conditions.
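The idea reduces to simple arithmetic: instead of bursting a whole window, packets are released one inter-packet gap apart. The formula and names below are illustrative assumptions for the sketch, not QUIC's actual pacer.

```java
// Back-of-the-envelope pacing arithmetic: the gap between packet releases
// that holds a target sending rate.
class PacerSketch {
    /** Microseconds to wait between packets of the given size at the target rate. */
    static long interPacketGapMicros(long packetBytes, long targetBytesPerSec) {
        return (packetBytes * 1_000_000L) / targetBytesPerSec;
    }
}
```

For example, 1200-byte packets at 1.2 MB/s come out one millisecond apart rather than in a single burst.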
When TCP is used, the headers and the payload are not encrypted unless an additional protocol is used; such protocols have to be layered on top of TCP to authenticate and encrypt the data. QUIC always encrypts and authenticates the data using SSL, and its handshake is inspired by the TLS handshake.
This chapter introduced the basic concepts used in the project. Google QUIC can be used for ingestion of streams from the encoder to the entry point of a media streaming architecture. As the path from the encoder to the entry point experiences high packet loss and RTT, it is well suited to a UDP-based protocol such as QUIC.
SOFTWARE REQUIREMENTS SPECIFICATION
A Software Requirements Specification (SRS) [45] is a detailed description of the behavior of the system to be constructed. It integrates the functional and non-functional requirements of the software. The functional requirements describe what exactly the software must do, while the non-functional requirements include the constraints on the design or implementation of the system. A function is described as a set of inputs, the behavior, and outputs. A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors.
This section describes the general factors that affect the system and its requirements. The software developed should provide acceleration of streams in the first mile. This section also deals with user characteristics, constraints on using the system and dependencies of the system on other applications.
The system should be versatile and easy to use. It should scale easily, respond quickly and be easy to maintain. The system is composed of several modules performing different tasks, and it is easy to deploy and use.
The intended audiences for this system are businesses that aim to deliver video content at the highest quality, scale rapidly to a large number of users and provide high quality viewing experience. This system is easy to deploy and provides an efficient and reliable delivery of media streams. This system is secure as data that is sent from the encoder is always encrypted. This system can be used to broadcast live events such as music concerts, Olympics, sports events, speeches, etc.
The primary function of the system is to support live streaming video delivery while optimizing quality, scalability and security. The system should be able to integrate with an encoder. The media encoder takes a real-time audio-video signal from an audio-video hardware device. In the case of HLS, the encoder encodes the media using an encoding format like H.264 for video and AAC for audio and encapsulates it in an MPEG-2 Transport Stream. A media segmenter is used to split the MPEG-2 Transport Stream into a series of individual segments of equal length. The encoder creates an index file that contains references to the segmented files. The QUIC client running on the encoder machine takes segments from the encoder and POSTs them to the QUIC server running on the ingest server machine. The segments and index file are received by the QUIC server and forwarded to the ingest server, which then transfers the media segments to the streaming mid-tier.
The project is to develop a software model that can be used effectively by water development authorities, and some of its features may also be useful to the common man. It is useful for governments and municipal authorities, who can use FCGWS results to devise desalination solutions and take necessary rectification steps if needed. For the common man, it can provide information about the condition of the water in a particular location and help him avoid using it for drinking purposes if it is contaminated. The users of the system expect a reasonably accurate model that is also fast and user friendly.
The encoder at the media generation infrastructure should accept the audio-video signals generated and send them to the entry point of the media streaming architecture. Any loss of data from the encoder can lead to insufficient data being transferred into the streaming architecture, resulting in a bad viewing experience for users. Hence, there should be no loss of data from the encoder.
The project is dependent on the following factors:
This section covers the SRS [45] in sufficient detail for designers to use it when designing a system that satisfies the requirements. The product perspective and user characteristics do not state the actual requirements the system must satisfy. The specific requirements are the actual data on which the customer and software provider can agree. The final system is expected to satisfy all the requirements mentioned here.
The functional requirements of the system are:
Traditional methods of media streaming use TCP as the transport protocol for transporting the streams from the encoder to the streaming architecture. UDP based streaming can improve the performance by reducing the latency.
Performance requirements include the following:
The system is web based and should thus support all available browsers. It must be deployed on a recent version of the Apache Tomcat server.
The software requirements of the application are as follows:
OS: Ubuntu 14.04 LTS or higher
Platform: Linux
Language: C and C++
Visual Interface: HTML, CSS
IDE/tool: Eclipse Neon.2 Release (4.6.2)
The following points describe the minimum hardware requirements for the running of the application.
Processor: x86 or x64 based processor
RAM: 4 GB
Storage: 100 GB
The system is flexibly designed. The amount of data available is not optimal, and more data can make the application more accurate. The design constraints include a user-friendly UI, and the system should be able to handle large amounts of data quickly. An internet connection is required to view the Google Maps that indicate the location of the wells, their potability status and their water levels.
This section describes the interfaces in detail. The subsection below describes the user interfaces implemented in the system.
User interfaces include options for selecting operations such as browsing the related data and viewing the forecasts. The following are some of the user interface options provided in the system:
Since the system is implemented in Java, it can run on any system that supports the JVM. The following software is used in this system:
These are the requirements that specify criteria that can be used to judge the operation of the system rather than specific behavior and are not directly concerned with the specific functions delivered by the system [45].
The specific requirements and constraints of this project have been detailed in this chapter, including the hardware, software and functional requirements. This chapter also cites the various assumptions made by the developer of the water quality potability and rainfall forecasting system. All of these have to be managed while using the system.
HIGH LEVEL DESIGN
Design is a significant phase in the development of software. It is basically a creative procedure that describes the system organization and establishes that it satisfies the functional and non-functional system requirements [47]. Larger systems are divided into smaller sub-systems that provide related services. The output of the design phase describes the software architecture to be used for the development of the common endpoint service. This section depicts the issues that need to be covered or resolved before attempting to devise a complete design solution.
The detailed design includes an explanation for all the modules. It throws light on the purpose, functionality, input and output.
The software requirements specification has been studied in order to design the system using Linear Regression improved with the Pearson coefficient [16] and Naïve Bayes classification [18], in which previous data is used to train the system for prediction and forecasting.
There are several design issues that need to be settled before designing a solution for the system to be implemented. The following sections describe the constraints that have a heavy impact on the software, the method or approach used for the development, and the architectural strategies. They also give an overview of the system design.
General constraints which need to be considered to use the system are listed below:
The design method employed is highlighted in this section:
The overall organization of the system and its high level structure is provided by this section and also provides the key insight into the mechanism and strategies used in system architecture.
Java programming language is used to design the system. Java supports object oriented programming, wide range of data types and application programming interface for handling the data. The user interface is developed using HTML, CSS and JavaScript.
Java [32][46][47] is the optimal programming language for the project because of the following reasons:
The GUI of the system is a multi-window system with simplified transfer of control from one window to another. The output is shown in a window that contains the predicted value as well as a graph showcasing the results. The user selects a well number from the available set of wells, and the water quality and ground water level forecasts are obtained.
Error detection and recovery is an important aspect of the implemented project. Exceptions may occur if the admin enters a wrong data type or leaves any field blank when inserting new data for upcoming years. Internal error detection is done using try-catch blocks.
Data storage management is necessary for the efficient operation of the program. It ensures that all the data generated as results is fed back into the training set and stored appropriately. This applies to both prediction and classification, as the model continuously learns after every processing run. The training data is passed to the system, and the results are stored back in the database and later retrieved for plotting the graph.
This process focuses on the basic structure of the model in the system. It identifies the major modules in the system and the communication among them. The approach used for the development of the common endpoint service is object oriented, wherein the system is divided into different objects that represent real-world entities. The architecture is depicted in the figure below.
The system architecture is as shown in Figure 4.1. Initially, the collected data is inserted into the database. The data is retrieved from the database and trained using the algorithm of Linear Regression improved with the Pearson coefficient, the values of the salts and the water level are forecasted for the next year, and the results are inserted back into the database. The forecasted salt values are then retrieved from the database and classified as potable or not using the Naïve Bayes classifier method. Finally, the values of the salts in the water of the different wells are represented spatially on GIS maps using the ARCGIS software.
A Data Flow Diagram (DFD) is a graphical representation of the "flow" of data through an information system [45]. Data flow models describe how data flows through a sequence of processing steps. A DFD is composed of four elements: process, data flow, external entity and data store. With a data flow diagram, users can easily visualize the operations within the system, what can be accomplished using the system, and how the system is implemented. DFDs give end users an abstract idea of the data that is given as input to the system and the effect that this input will ultimately have upon the whole system.
The level 0 DFD describes the general operation of the system. There are two modules: water quality potability and ground water level prediction. The data is collected from the CGWB, and the relevant data is passed as input to the corresponding module, which predicts the respective outputs. The level 0 data flow diagram is as shown in Figure 4.2.
The Level 1 DFD describes the system in more detail than the Level 0 DFD, as shown in the figure below.
The Level 1 DFD in Figure 4.3 clearly shows the splitting of the system into its constituent modules, which perform the basic tasks of the software, i.e. forecasting. The prediction module performs the prediction of the ground water level and water quality potability. The system takes the water quality data and ground water level from the database as input and gives as output the respective water quality potability and ground water level forecasts to the government.
The process in level 1 is expanded here. The Level 2 DFD for water quality potability forecasting is shown in Figure 4.4 and the Level 2 DFD for water level forecasting is shown in Figure 4.5.
Figure 4.4 shows the level 2 DFD for water quality potability forecasting. The water quality data from the CGWB [1] is taken as input and fed into the system. The system takes the data from the database and trains itself with previous years' data using Linear Regression with the Pearson coefficient. The values of the salts are then forecasted for the next year. These values are input to the Naïve Bayes classifier method, and the data is classified as good or bad for drinking purposes. Finally, the values of the salts are spatially represented in GIS maps.
Figure 4.5 shows the level 2 DFD for water level forecasting. The water level data from the CGWB [1] is taken as input and fed into the system. The system takes the data from the database and trains itself with previous years' data using Linear Regression with the Pearson coefficient. Finally, the water level values are spatially represented in GIS maps.
Level 3 depicts how the system is further split into sub-processes, each of which provides part of the functionality of the whole system. The Level 3 DFD for water quality potability forecasting is shown in Figure 4.6 and the Level 3 DFD for water level forecasting is shown in Figure 4.7.
Figure 4.6 shows the level 3 DFD for water quality potability forecasting. The water quality data from the CGWB [1] is taken as input and stored in the database. The system takes the data from the database and trains itself with previous years' data using Linear Regression with the Pearson coefficient. The values of the salts are then forecasted for the next year and inserted back into the database. These values are taken from the database and input to the Naïve Bayes classifier method, and the data is classified as good or bad for drinking purposes. The values to be spatially represented are converted into an ARCGIS-readable format, and the values of the salts are then spatially represented in GIS maps.
Figure 4.7 shows the level 3 DFD for water level forecasting. The water level data from the CGWB [1] is taken as input and stored in the database. The system takes the data from the database and trains itself with previous years' data using Linear Regression with the Pearson coefficient. The values of the water levels are then forecasted for the next year and inserted back into the database. The values to be spatially represented are converted into an ARCGIS-readable format, and the water levels are then spatially represented in GIS maps.
The above data models depict how data is processed by the system; this constitutes the analysis level. The notations applied above represent functional processing, data stores and data movement among the functions. The purpose of this chapter is to describe the major high-level processes and their interrelations in the system, and all the above-mentioned levels of DFDs illustrate this.
DETAILED DESIGN FOR FCGWS SYSTEM
In the detailed design phase [45], the internal logic of every module specified in the High Level Design (HLD) is determined. Specifically, in this phase the design of each module and its low-level components and subcomponents are described. After the HLD [46] is determined, a graphical representation of the software system being developed is drawn. Each module's input and output types, along with the possible data structures and algorithms used, are documented during the detailed design phase. The following sections provide this information for each module.
The structure chart [45] shows the control flow among the modules in the system. It explains all the identified modules, the interactions between them and the identified sub-modules. The structure chart also explains the input to and the output generated by each module.
In FCGWS, there are three sub modules. They are forecasting, classification and spatial representation. The forecasting module is further divided into two. They are water quality sub module and water level sub module. The description of the sub modules, the flow of data and the results of each sub module are shown in figure 5.1.
The internal working of each of the modules is explained in this section. It also describes the software component and sub-component of the FCGWS system.
This module describes the forecasting of the water quality parameters for the upcoming year on selecting a particular well. There are twelve parameters, whose values have to be forecasted.
After obtaining the two parameters, linear regression technique discussed in Chapter 2, is applied on those two parameters. For example, let the two parameters be fluorine and pH. Then, linear regression method is applied considering fluorine as the input parameter (X) and pH as the output parameter (Y), thus forecasting the pH value for the upcoming year.
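The regression step above can be sketched with a simple univariate least-squares fit, with the Pearson coefficient available to rank candidate input parameters. This is a hypothetical illustration of the technique; the class and method names (LinearForecast, fit, predict, pearson) are invented for the example and are not the FCGWS source code.

```java
// Least-squares fit y = a + b*x plus the Pearson correlation coefficient,
// sketching the "Linear Regression improved with Pearson coefficient" step.
class LinearForecast {
    /** Returns {intercept a, slope b} of the least-squares line through (x, y). */
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        return new double[]{(sy - b * sx) / n, b};
    }

    /** Forecast: evaluate the fitted line at a new input value. */
    static double predict(double[] ab, double x) {
        return ab[0] + ab[1] * x;
    }

    /** Pearson correlation, usable to pick the most correlated input parameter. */
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        return (n * sxy - sx * sy) / Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    }
}
```

In the example from the text, the historical fluorine readings would be passed as x and the pH readings as y, and the fitted line evaluated at next year's fluorine value yields the forecasted pH.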
This module describes the forecasting of the water level after three seasons for the upcoming year on selecting a particular well.
After obtaining the two parameters, linear regression is applied on those two variables. Here, the two variables are the water level and the current year's rainfall. Linear regression is applied considering rainfall as the input parameter (X) and water level as the output parameter (Y), thus forecasting the water level values after three seasons for the upcoming year.
After the water quality parameters are forecasted, the potability status is determined for the upcoming year on selecting a particular well. The Naïve Bayes classifier method is used for this.
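A Naive Bayes classification of a forecast sample can be sketched as follows. This is a hypothetical illustration using per-class Gaussian likelihoods under the method's conditional-independence assumption; all names and the variance smoothing constant are invented for the sketch and it is not the FCGWS implementation.

```java
// Gaussian Naive Bayes sketch: label a sample (one value per water quality
// parameter) as potable or not from labeled training rows.
class NaiveBayesSketch {
    static double gaussian(double x, double mean, double var) {
        return Math.exp(-(x - mean) * (x - mean) / (2 * var))
                / Math.sqrt(2 * Math.PI * var);
    }

    /** rows = training samples, labels[i] = true if potable; returns label for query. */
    static boolean classify(double[][] rows, boolean[] labels, double[] query) {
        double bestScore = Double.NEGATIVE_INFINITY;
        boolean bestLabel = false;
        for (boolean cls : new boolean[]{true, false}) {
            int n = 0;
            for (boolean l : labels) if (l == cls) n++;
            if (n == 0) continue;
            double logScore = Math.log((double) n / labels.length); // class prior
            for (int f = 0; f < query.length; f++) {
                double mean = 0, var = 0;
                for (int i = 0; i < rows.length; i++)
                    if (labels[i] == cls) mean += rows[i][f];
                mean /= n;
                for (int i = 0; i < rows.length; i++)
                    if (labels[i] == cls) var += (rows[i][f] - mean) * (rows[i][f] - mean);
                var = var / n + 1e-9; // smoothing avoids division by zero
                logScore += Math.log(gaussian(query[f], mean, var)); // independence assumption
            }
            if (logScore > bestScore) { bestScore = logScore; bestLabel = cls; }
        }
        return bestLabel;
    }
}
```

Each class posterior is the prior times the product of per-parameter likelihoods (summed in log space for numerical stability), and the class with the higher score wins.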
The forecasted values are spatially represented on GIS maps using the ARCGIS software.
First, the location of the wells is plotted. Once the wells are plotted, the IDW [10] technique is chosen along with the required contour interval, thus producing the required GIS maps.
The internal working of the project with the necessary data flow has been described in this chapter. A clear view on control flow within FCGWS was conveyed by the structure chart with the functionality of its modules being explained.
IMPLEMENTATION OF FCGWS SYSTEM
The implementation phase is a significant phase in project development, as it delivers the final solution that solves the problem. In this phase the low-level designs are transformed into language-specific programs such that the requirements given in the SRS [45] are satisfied. This phase requires the actual implementation of the ideas that were described in the analysis and design phases.
The technique and the methods that are used for implementing software must support reusability, ease of maintenance and should be well documented.
The programming language chosen to implement the project is Java [32][46][47]. Java is one of the most useful languages of the current age, with extensive coverage in scientific computing domains like Artificial Intelligence (AI), machine learning, astronomy and computer vision. The reasons for choosing Java are listed below:
The system is designed to work on Windows operating systems, mainly because of their relative simplicity and the abundant documentation available for them. Since there is no necessity to manipulate system data structures, there is no necessity for an open source operating system. Also, since Java, which is platform independent, is used, it is easy to adapt the project to work on other operating systems like Linux.
This section discusses the coding standards followed throughout the project. It includes the software applications that are necessary to complete the project. Proper coding standards should be followed because a large project must be coded in a consistent style; this makes it easier to understand any part of the code without much difficulty. Code conventions are important because they improve the readability of software, allowing programmers to understand the code clearly.
Naming conventions help make programs understandable and easier to read. The names given to packages, scripts, graphs and classes should be clear and precise so that their contents can easily be understood. The conventions followed for this project are as follows:
Classes: Class names must be nouns. The upper camel casing method is followed, in which the first letter of every word is capitalized, including the first word. Example: AddWell, PredictPh.
Methods: Method names should be verbs. For methods too, upper camel casing is followed. Example: GetEc().
Variables: Variable names must be short and meaningful. Example: wellno, which indicates the well number for which the prediction is to be made.
The files used to implement the project were organized and kept in certain order based on their types.
Standard declaration conventions are followed, with standard names that make it easy to understand the role of each declared entity. Multiple declarations per line are not allowed, to allow commenting and reduce ambiguity.
Comments are a necessary part of any coding convention, as they improve the understandability of the code. Comment lines begin with the characters '//', and anything after '//' is ignored by the compiler; the '//' characters only tell the compiler to ignore the remainder of the same line.
In the project files, commented areas are printed in grey by default, so they are easy to identify. There are two useful keyboard shortcuts for adding and removing chunks of comments: select the code to be commented or uncommented, then press Ctrl-K + Ctrl-C to comment the selected block, which places a '//' at the beginning of each line, or Ctrl-K + Ctrl-U to uncomment it. Comments for blocks of code start with '/*' and are terminated by '*/'.
Comments are useful for explaining what function a certain piece of code performs, especially if the code relies on implicit or subtle assumptions or otherwise performs subtle actions.
To make documentation easy, Java provides a special type of comment called Javadoc. In the project, Javadoc comments are declared with every class and function. A Javadoc comment looks as follows (the description below is a generic illustration):
/**
 * Forecasts the pH value for the selected well.
 * @param wellno the number of the well to forecast for
 * @return the forecasted pH value
 */
This section discusses the difficulties encountered in the development of the project. The main difficulties encountered were in the collection of data and the machine learning algorithms.
Data is needed to train [18][20] models for prediction as well as classification. The data collected was in Microsoft Excel sheets, with many values missing and extra entries that were random and not useful. Also, data about every well was present in a single file, which made it confusing to read the data individually for each well.
To overcome these issues, several actions were taken. First, the data for each well was manually put into separate files for clearer access, and the wells with consistent data were identified. The unnecessary columns were removed from the data files and only the important parameters were retained. Once the required data was obtained, it was inserted into the database. The data is retrieved using the Java Persistence API (JPA), which helps prevent SQL injection attacks [47].
For the spatial representation of the data in GIS maps, the software called ARCGIS is used. The data required for drawing the maps should be in an Excel file in .xls format (not .xlsx); this file is input to the ARCGIS software to obtain the GIS maps.
The design of the user interface for the Windows application was a challenging task because it had to be kept very simple, yet the user should be able to access all the results with a few button clicks. Bootstrap, a front-end framework that makes UI design fast and simple, was used because it is extensively used in the industry and enables user-friendly designs. Developing the design with this framework was difficult, as few applications had been built with it before.
This chapter deals with the programming language, development environment and code conventions followed during implementation of the project. It also explains the difficulties encountered in the course of implementation of the project and strategies used to tackle them.
SOFTWARE TESTING FOR FCGWS SYSTEM
The aim of software testing [48] is to detect defects or errors by testing the components of programs individually. During testing, the components are combined to form a complete system. At this stage, testing is concerned with demonstrating that the functions meet the required functional goals and do not behave in abnormal ways. The test cases are chosen to ensure that the system behavior can be tested for all combinations, and the expected behavior of the system under these combinations is specified. Accordingly, test cases are selected with inputs and outputs on expected lines, with invalid inputs for which suitable messages must be given, and with inputs that do not occur very frequently and can be regarded as special cases.
In this chapter, several test cases are designed for testing the behavior of all modules. When all modules are implemented completely, they are integrated and deployed on a Tomcat server, and the test cases are executed in the same environment. The test cases mainly cover the functionality of all modules. Once the application passes all the test cases, it is deployed on the production environment for actual real-time use.
The proposed software mainly deals with the prediction and forecasting of corresponding data. In this system all the inputs to system are given through the User Interface (UI), built in HTML, CSS. The output is also viewed through GUI. For testing software, various test strategies are to be used such as unit testing, integration testing, system testing and interface testing.
Several software components are required to test the components of the system. Within the NetBeans IDE [46] itself, the system can be debugged by setting breakpoints in the code and then running the application in the debugger. Executing the code line by line and examining the application state helps discover any problems. In this section, the various parameters that are directly responsible for the performance of the system are explained.
Unit testing concentrates the verification effort on the smallest unit of software design: the individual software component or module. Testing at this level makes any bugs that occur easy to localize. The test cases on which this testing is performed are shown below.
The test cases for water quality potability forecasting [16] on which this testing is performed are given first. The testing of the forecasting of the parameters' values is shown in the table below.
Sl No. of Test Case | 1 |
Name of Test Case | Forecasting of Water Quality Potability |
Feature being Tested | Forecasted values of potability |
Description | Ground water Sample of selected well is to be forecasted |
Sample Input | Well number |
Expected Output | Salt contents in mg/L, pH and EC in S/m |
Actual Output | Salt contents in mg/L, pH and EC in S/m |
Remarks | Successful forecasting |
Table 7.1 shows the forecasted values of the salts of the water sample for the upcoming year. This test was successful. Here, the user has to input the well number for which the forecasted values have to be displayed. The module successfully displays the forecasted values with the corresponding units.
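In code, a unit test of this kind can be sketched as follows. This is a minimal, self-contained sketch: `WaterQualityForecaster`, its `forecast` method and the parameter ordering are hypothetical stand-ins for the actual FCGWS module, which the sketch replaces with a stub.

```java
// Hypothetical stand-in for the real regression-based forecasting module.
class WaterQualityForecaster {
    // Stub returning forecasted values for the twelve quality
    // parameters (pH, EC, TDS, hardness, Cl, SO4, NO3, Ca, Mg, Na, K, F).
    double[] forecast(String wellNumber) {
        return new double[] {7.2, 850, 540, 300, 95, 40, 22, 60, 30, 85, 4, 0.9};
    }
}

public class ForecastUnitTest {
    // Unit test mirroring Test Case 1: for a valid well number the
    // module must return a value for every one of the twelve parameters.
    public static boolean forecastCoversAllParameters(String wellNumber) {
        double[] values = new WaterQualityForecaster().forecast(wellNumber);
        if (values == null || values.length != 12) return false;
        for (double v : values) {
            if (Double.isNaN(v)) return false; // every parameter forecasted
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(forecastCoversAllParameters("SW5C2G2") ? "PASS" : "FAIL");
    }
}
```

In the deployed system the stub would be replaced by the real forecaster, and the same check would be run against it.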
The accuracy of the individual parameters' values is tested by comparing the predicted values of the parameters with the actual values for the year 2015.
The accuracy test for the parameter pH is shown in the table below.
Sl No. of Test Case | 2 |
Name of Test Case | Prediction of salts (pH) |
Feature being Tested | Predicted value of each salt (pH) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | pH value |
Actual Output | pH value |
Remarks | Successful prediction with 95% accuracy (for the year 2015). |
Table 7.2 shows the accuracy of the parameter pH. The input to this system is the well number. Once the well number is entered, the comparison between the predicted pH value and the actual pH value for the year 2015 is displayed. An accuracy of about 95% is achieved.
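The chapter does not spell out how these accuracy percentages are computed. One plausible reading, sketched below under that assumption, treats accuracy as 100 minus the mean absolute percentage error between the predicted and actual values:

```java
public class PredictionAccuracy {
    // Accuracy as 100 - MAPE. This formula is an assumption made for
    // illustration; the thesis does not define how the reported
    // percentages were actually computed.
    public static double accuracy(double[] predicted, double[] actual) {
        double totalPctError = 0.0;
        for (int i = 0; i < actual.length; i++) {
            totalPctError += Math.abs(predicted[i] - actual[i]) / actual[i];
        }
        return 100.0 * (1.0 - totalPctError / actual.length);
    }

    public static void main(String[] args) {
        // Illustrative predicted vs actual pH values for one well.
        double[] predictedPH = {7.4, 7.1, 6.9};
        double[] actualPH    = {7.2, 7.3, 7.0};
        System.out.printf("accuracy = %.1f%%%n", accuracy(predictedPH, actualPH));
    }
}
```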
The accuracy test for the parameter fluoride is shown in the table below.
Sl No. of Test Case | 3 |
Name of Test Case | Prediction of salts (fluoride) |
Feature being Tested | Predicted value of each salt (fluoride) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | Fluoride value in mg/L |
Actual Output | Fluoride value in mg/L |
Remarks | Successful prediction with 97% accuracy (for the year 2015). |
Table 7.3 shows the accuracy of the parameter fluoride. The input to this system is the well number. Once the well number is entered, the comparison between the predicted fluoride value and the actual fluoride value, in mg/L, for the year 2015 is displayed. An accuracy of about 97% is achieved.
The accuracy test for the parameter electrical conductivity (EC) is shown in the table below.
Sl No. of Test Case | 4 |
Name of Test Case | Prediction of salts (EC) |
Feature being Tested | Predicted value of each salt (EC) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | EC value in µS/cm |
Actual Output | EC value in µS/cm |
Remarks | Successful prediction with 80% accuracy (for the year 2015). |
Table 7.4 shows the accuracy of the parameter EC. The input to this system is the well number. Once the well number is entered, the comparison between the predicted EC value and the actual EC value for the year 2015, in µS/cm, is displayed. An accuracy of about 80% is achieved.
The accuracy test for the parameter sodium is shown in the table below.
Sl No. of Test Case | 5 |
Name of Test Case | Prediction of salts (Sodium) |
Feature being Tested | Predicted value of each salt (Sodium) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | Sodium in mg/L |
Actual Output | Sodium in mg/L |
Remarks | Successful prediction with 87% accuracy (for the year 2015). |
Table 7.5 shows the accuracy of the parameter sodium. The input to this system is the well number. Once the well number is entered, the comparison between the predicted sodium value and the actual sodium value for the year 2015 in mg/L is displayed. An accuracy of about 87% is achieved.
The accuracy test for the parameter chloride is shown in the table below.
Sl No. of Test Case | 6 |
Name of Test Case | Prediction of salts (chloride) |
Feature being Tested | Predicted value of each salt (chloride) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | Chloride in mg/L |
Actual Output | Chloride in mg/L |
Remarks | Successful prediction with 75% accuracy (for the year 2015). |
Table 7.6 shows the accuracy of the parameter chloride. The input to this system is the well number. Once the well number is entered, the comparison between the predicted chloride value and the actual chloride value for the year 2015, in mg/L, is displayed. An accuracy of about 75% is achieved.
The accuracy test for the parameter sulphate is shown in the table below.
Sl No. of Test Case | 7 |
Name of Test Case | Prediction of salts (sulphate) |
Feature being Tested | Predicted value of each salt (sulphate) |
Description | Ground water Sample of selected well is to be predicted |
Sample Input | Well number |
Expected Output | sulphate in mg/L |
Actual Output | sulphate in mg/L |
Remarks | Successful prediction with 80% accuracy (for the year 2015). |
Table 7.7 shows the accuracy of the parameter sulphate. The input to this system is the well number. Once the well number is entered, the comparison between the predicted sulphate value and the actual sulphate value for the year 2015 in mg/L is displayed. An accuracy of about 80% is achieved.
The test for the classification [18] of water quality as potable or not is shown in the table below.
Sl No. of Test Case | 8 |
Name of Test Case | Classification of water for potability based on salinity |
Feature being Tested | Classification of water for potability |
Description | Ground water Sample of selected well is to be classified |
Sample Input | Well number |
Expected Output | Classified as potable or not based on WHO standards |
Actual Output | Classified as potable or not based on WHO standards |
Remarks | Successful classification |
Table 7.8 shows the classification test for the water sample. The input to this system is the well number. Once the well number is entered, the forecasted values of the parameters are compared with the WHO standards; if any of the values is not within its range, the sample is classified as not potable, otherwise it is classified as potable.
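The comparison against the WHO ranges can be sketched as follows. Only three of the twelve parameters are shown, and the limits used here (pH 6.5–8.5, fluoride 1.5 mg/L, nitrate 50 mg/L) are commonly cited drinking-water guideline values assumed for illustration; the actual WHO table used by the system should be consulted.

```java
public class PotabilityClassifier {
    // Indicative permissible limits. These figures are common guideline
    // values and are assumptions here, not a transcription of the WHO
    // table actually used by the FCGWS system.
    static final double PH_MIN = 6.5, PH_MAX = 8.5;
    static final double FLUORIDE_MAX_MG_L = 1.5;
    static final double NITRATE_MAX_MG_L  = 50.0;

    // Only three of the twelve parameters are shown; the remaining ones
    // would be checked the same way against their respective limits.
    public static boolean isPotable(double pH, double fluorideMgL, double nitrateMgL) {
        return pH >= PH_MIN && pH <= PH_MAX
            && fluorideMgL <= FLUORIDE_MAX_MG_L
            && nitrateMgL <= NITRATE_MAX_MG_L;
    }

    public static void main(String[] args) {
        System.out.println(isPotable(7.2, 0.9, 20.0)); // within all ranges
        System.out.println(isPotable(7.2, 2.4, 20.0)); // fluoride out of range
    }
}
```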
The ground water level of the wells is forecasted [20] for the upcoming year, and the testing is performed as follows. The forecasted value testing is shown in Table 7.9.
Sl No. of Test Case | 9 |
Name of Test Case | Forecasting of water level |
Feature being Tested | Forecasted values of water level |
Description | Ground water Sample of selected well is to be forecasted |
Sample Input | Well number |
Expected Output | Water level values in meters from surface (for three seasons) |
Actual Output | Water level values in meters from surface (for three seasons) |
Remarks | Successful forecasting |
Table 7.9 shows the water level values for the upcoming year. This test was successful. Here, the user has to input the well number for which the forecasted values have to be displayed. The module successfully displays the forecasted values of the water level, in meters from the surface, after the three seasons, that is, pre-monsoon, south-west monsoon and north-east monsoon.
The accuracy of the water level after each of the three seasons has also been tested, by comparing the predicted values with the actual values for the year 2015.
The accuracy of the water level after the pre-monsoon season is shown in the table below.
Sl No. of Test Case | 10 |
Name of Test Case | Prediction of water level (for pre-monsoon season) |
Feature being Tested | Prediction of values of water level (for pre-monsoon season) |
Description | Ground water Sample of selected well is to be forecasted |
Sample Input | Well number |
Expected Output | Water level values in meters from surface (for pre-monsoon season) |
Actual Output | Water level values in meters from surface (for pre-monsoon season) |
Remarks | Successful prediction with 86% accuracy for the year 2015 (pre-monsoon season) |
Table 7.10 shows the accuracy of the water level after the pre-monsoon season. The input to this system is the well number. Once the well number is entered, the comparison between the predicted water level value and the actual water level value for the year 2015, in meters from the surface, is displayed. An accuracy of about 86% is achieved.
The accuracy of the water level after the south-west monsoon season is shown in the table below.
Sl No. of Test Case | 11 |
Name of Test Case | Prediction of water level (for south-west monsoon season) |
Feature being Tested | Prediction of values of water level (for south-west monsoon season) |
Description | Ground water Sample of selected well is to be forecasted |
Sample Input | Well number |
Expected Output | Water level values in meters from surface (for south-west monsoon season) |
Actual Output | Water level values in meters from surface (for south-west monsoon season) |
Remarks | Successful prediction with 84% accuracy for the year 2015 (south-west monsoon season) |
Table 7.11 shows the accuracy of the water level after the south-west monsoon season. The input to this system is the well number. Once the well number is entered, the comparison between the predicted water level value and the actual water level value for the year 2015, in meters from the surface, is displayed. An accuracy of about 84% is achieved.
The accuracy of the water level after the north-east monsoon season is shown in the table below.
Sl No. of Test Case | 12 |
Name of Test Case | Prediction of water level (for north-east monsoon season) |
Feature being Tested | Prediction of values of water level (for north-east monsoon season) |
Description | Ground water Sample of selected well is to be forecasted |
Sample Input | Well number |
Expected Output | Water level values in meters from surface (for north-east monsoon season) |
Actual Output | Water level values in meters from surface (for north-east monsoon season) |
Remarks | Successful prediction with 88% accuracy for the year 2015 (north-east monsoon season) |
Table 7.12 shows the accuracy of the water level after the north-east monsoon season. The input to this system is the well number. Once the well number is entered, the comparison between the predicted water level value and the actual water level value for the year 2015, in meters from the surface, is displayed. An accuracy of about 88% is achieved.
The admin of the system can input the actual values for any particular year. Here, the validation of all the fields and the correct insertion into the database are tested.
Insertion into the water level table is tested as shown in the table below.
Sl No. of Test Case | 13 |
Name of Test Case | Water level table update |
Feature being Tested | Updating the water level data in the table |
Description | Admin inputs the values which should be inserted to database |
Sample Input | Values for the season, well number, depth fields, etc. |
Expected Output | Insertion into database (if the fields are not null) on submitting |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.13 shows the testing of the correct insertion into the database. The admin of the system inputs the actual values of fields such as depth, season and well number and submits the data, and the data is correctly inserted into the water level table in the database.
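The "fields are not null" condition in Table 7.13 can be sketched as a small validation helper; the field names used here are illustrative, not the actual schema of the system.

```java
public class WaterLevelInsertValidator {
    // Null/blank check mirroring the "fields are not null" condition in
    // Table 7.13. The three fields shown are illustrative; the real form
    // may carry more.
    public static boolean allFieldsPresent(String season, String wellNumber, String depth) {
        return isPresent(season) && isPresent(wellNumber) && isPresent(depth);
    }

    private static boolean isPresent(String s) {
        return s != null && !s.trim().isEmpty();
    }

    public static void main(String[] args) {
        // Only when validation passes would the servlet go on to run a
        // parameterized insert such as:
        //   INSERT INTO water_level (season, well_number, depth) VALUES (?, ?, ?)
        System.out.println(allFieldsPresent("pre-monsoon", "SW5C2G2", "1.83"));
        System.out.println(allFieldsPresent("pre-monsoon", null, "1.83"));
    }
}
```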
Insertion into the water quality table is tested as shown in the table below.
Sl No. of Test Case | 14 |
Name of Test Case | Water quality table update |
Feature being Tested | Updating the water quality data in the table |
Description | Admin inputs the values which should be inserted to database |
Sample Input | Values for the well number, values of the salts, etc. |
Expected Output | Insertion into database (if the fields are not null) on submitting |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.14 shows the testing of the correct insertion into the database. The admin of the system inputs the actual values of fields such as the well number, taluk name and values of the salts and submits the data, and the data is correctly inserted into the water quality table in the database.
Insertion into the taluk table is tested as shown in the table below.
Sl No. of Test Case | 15 |
Name of Test Case | New Taluk insertion |
Feature being Tested | Inserting the new taluk into the table |
Description | Admin inputs the values which should be inserted to database |
Sample Input | Taluk name |
Expected Output | Insertion into database (if the fields are not null) on submitting |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.15 shows the testing of the correct insertion into the database. The admin of the system inputs the taluk name and submits it, and the data is correctly inserted into the taluk table in the database.
Insertion of a new well is tested as shown in the table below.
Sl No. of Test Case | 16 |
Name of Test Case | New well insertion |
Feature being Tested | Inserting the new well information |
Description | Admin inputs the values which should be inserted to database |
Sample Input | Values for the season, well number, depth, salts fields, etc. |
Expected Output | Insertion into database (if the fields are not null) on submitting |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.16 shows the testing of the correct insertion of a new well into the database. The admin of the system inputs the taluk name, the well number, the values of the salts and the water level values and submits this data; the water quality data is correctly inserted into the water quality table and the water level data into the water level table in the database.
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take the unit-tested components and build the complete program structure from them.
The values of the parameters are forecasted and the forecasted values are inserted into the database. This testing is shown in the table below.
Sl No. of Test Case | 17 |
Name of Test Case | Forecasted water quality values update |
Feature being Tested | Updating the forecasted values of the salts |
Description | The forecasted values of the salts for the upcoming year to be inserted into database |
Sample Input | Forecasted values of the salts |
Expected Output | The forecasted values of the salts are inserted into the database |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.17 shows the integration testing of water quality module and insertion of the forecasted values of the salts into the database. The forecasted values of salts for the upcoming year have been successfully inserted into the database.
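The insertion of the forecasted values can be sketched as follows. The table and column names are illustrative, not the actual FCGWS schema; with a live JDBC connection the generated statement would be prepared and the forecasted values bound before executing it.

```java
// Hedged sketch of the integration step in Table 7.17: writing
// forecasted salt values to the database via a parameterized insert.
public class ForecastWriter {
    // Builds the parameterized INSERT for the forecasted values. Keeping
    // it parameterized avoids SQL injection and quoting bugs. The schema
    // names here are illustrative.
    public static String insertSql(int saltColumns) {
        StringBuilder sb = new StringBuilder(
            "INSERT INTO water_quality_forecast (well_number, year");
        for (int i = 1; i <= saltColumns; i++) sb.append(", salt").append(i);
        sb.append(") VALUES (?, ?");
        for (int i = 1; i <= saltColumns; i++) sb.append(", ?");
        sb.append(")");
        return sb.toString();
    }

    public static void main(String[] args) {
        // With a live connection the statement would be prepared and the
        // twelve forecasted values bound before executeUpdate():
        //   PreparedStatement ps = conn.prepareStatement(insertSql(12));
        System.out.println(insertSql(2));
    }
}
```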
The values of the water levels are forecasted and the forecasted values are inserted into the database. This testing is shown in the table below.
Sl No. of Test Case | 18 |
Name of Test Case | Forecasted water level values update |
Feature being Tested | Updating the forecasted values of the water levels |
Description | The forecasted values of the water levels for the upcoming year to be inserted into database |
Sample Input | Forecasted values of the water levels |
Expected Output | The forecasted values of the water levels are inserted into the database |
Actual Output | Insertion into database occurred |
Remarks | Test successful |
Table 7.18 shows the integration testing of water level module and insertion of the forecasted values of the water levels into the database. The forecasted values of water levels for the upcoming year have been successfully inserted into the database.
System testing [48] is the testing in which all the modules that were exercised during integration testing are combined to form a single system. The system is tested to verify that all the units are linked properly and satisfy the user's requirements. This test helps remove the remaining bugs and improves the quality and assurance of the system. The proper functionality of the system is confirmed by system testing.
The whole system is evaluated in system testing, with the forecasting of both the ground water level and the water quality being tested. The system testing is shown in the table below.
Sl No. of Test Case | 19 |
Name of Test Case | System test |
Feature being Tested | System |
Description | Testing if the whole system is working |
Sample Input | Complete training and test data |
Expected Output | Forecasted values of quality, classification and water level |
Actual Output | Forecasted values obtained |
Remarks | Test successful |
Table 7.19 shows the system testing. Here all the modules are combined and tested. The system should forecast the values of the water quality parameters, compare these values with the WHO standards, and display them along with the status of potability, that is, whether the water sample is potable or not.
It should also display the forecasted values of water levels. On testing, the system successfully met all the requirements.
This chapter covered the general testing process, which starts with unit testing of the modules, followed by integration testing, wherein the modules are merged together, and system testing, wherein the entire system is tested for its functionality and correctness. Finally, functional testing of the user interface was performed. The tests proved successful in most cases; the failed tests were investigated and the causes rectified.
EXPERIMENTAL RESULTS AND ANALYSIS FOR FCGWS SYSTEM
In the analysis of a process, experiments are commonly used to evaluate which inputs of the process have a significant impact on its output, and what the target levels of those inputs should be to achieve a desired result. The output obtained from the system is compared with the ground truth to verify the correctness of the system [49]. There are several metrics for this comparison, and analyzing the experimental output means verifying whether the evaluation metrics are satisfied. This chapter discusses the performance characteristics of the system.
Evaluation metrics are the criteria for comparing different algorithms. The behavior of the algorithms or techniques can be determined using these metrics; a given technique may satisfy only some of them.
In this project, the outputs obtained from the different inputs given to the system are compared with the ground truth to check whether the metrics are satisfied. The metrics against which a good technique should be evaluated are:
To ensure the correctness of a project or a system, it has to be tested under various conditions. For the forecasting of the ground water level, the experimental dataset consists of two columns. The first column is the date; the data was taken for three seasons, in the January, May and August months of the respective years. The second column consists of the depth of the water level. The water quality forecasting uses 12 parameters, namely pH, EC, total dissolved solids, total hardness, Cl, SO4, NO3, Ca, Mg, Na, K and F, measured in each sample of water, to predict and classify whether the water is potable or not potable. For every module, the user has to input the well number for which the results are to be displayed. The predictor, forecaster and classifier give results for the same.
Out of the 32 bore wells in Chikballapur district, consistent data for the water level is currently available for only 2 wells. The experimental dataset of the ground water level consists of two columns. The first column is the date; the data was taken for three seasons, in the January, May and August months of the respective years. The second column consists of the depth of the water level. The partial dataset is tabulated and shown below in figure 8.1.
8.2.1 Ground water quality training data set
Out of the 32 bore wells in Chikballapur district, water quality data is available for only 2 bore wells in some taluks. The experimental dataset of water quality consists of 12 parameters. The values of pH, EC, total dissolved solids, total hardness, Cl, SO4, NO3, Ca, Mg, Na, K and F present in the groundwater have been used to predict the potability of the water. The partial dataset is tabulated and shown below in figure 8.2.
This section explains the experimental results of this project. An accuracy of 87% was achieved in the forecasting of water quality, 92% in classification and 85% in water level forecasting.
The output obtained from the system is evaluated against the different metrics to measure its performance. Figure 8.3 gives the forecasting of the ground water level data for the upcoming year. The ground water level data consists of three seasons' values, taken in the January, May and August months of the respective years. The results are predicted and forecasted accordingly for these seasons.
Figure 8.3 shows the screenshot of the experimental results of the water level for the upcoming year for the well SW5C2G2. The water level is 1.83 m after the pre-monsoon season, 2.07 m after the south-west monsoon and 1.34 m after the north-east monsoon, measured from the surface.
The figure 8.4 shows the screenshot of bar graph that shows the variation of the water level after the three seasons for the last five years. The bar marked in blue indicates the variation of water level values after pre-monsoon season. The one in red indicates the water level values after north-east monsoon season and the one in yellow indicates the values after south-west monsoon.
The output obtained from a system is evaluated against the different metrics to measure the performance of the system. Figure 8.5 gives the forecasting of the ground water quality data for the upcoming year. The Ground water quality consists of twelve parameters and also indicates the status of potability. The results are predicted and forecasted for upcoming year.
The figure 8.5 shows screenshot of the experimental results of water quality for the upcoming year for the well SW5C2G2. The values for the twelve parameters are as shown in figure 8.5. The parameters indicated by red represent the values that are not in the range according to the WHO standards and responsible for indicating the potability.
The figures 8.6 through 8.8 show the variations of the water quality parameters such as EC, pH and others for the last five years.
Figure 8.6 indicates the statistics indicating the changes of water quality parameters like pH, EC and nitrate over the last 5 years.
Figure 8.7 indicates the statistics indicating the changes of water quality parameters like magnesium, sodium and potassium over the last 5 years.
Figure 8.8 indicates the statistics indicating the changes of water quality parameters like calcium, TDS and hardness over the last 5 years.
The multivariate analysis model (this model) is developed using linear regression improved with the Pearson coefficient for forecasting the water quality parameters' values. These forecasted values are then input to the Naïve Bayes classifier, which classifies the water sample as potable or not. The accuracy for forecasting the values is 87% and the accuracy for classification is 92%, compared to the univariate model [41], which had an accuracy of 85%.
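The Pearson coefficient on which the improved regression relies can be computed as in the following sketch (the sample values are illustrative, not taken from the Chikballapur dataset):

```java
public class PearsonCoefficient {
    // Pearson correlation r between two equally sized samples. In the
    // model described above, r indicates how strongly two water quality
    // parameters vary together and is used to weight the regression.
    public static double correlation(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0, sumY2 = 0;
        for (int i = 0; i < n; i++) {
            sumX  += x[i];
            sumY  += y[i];
            sumXY += x[i] * y[i];
            sumX2 += x[i] * x[i];
            sumY2 += y[i] * y[i];
        }
        double num = n * sumXY - sumX * sumY;
        double den = Math.sqrt((n * sumX2 - sumX * sumX) * (n * sumY2 - sumY * sumY));
        return num / den;
    }

    public static void main(String[] args) {
        double[] ec  = {800, 900, 1000, 1100};
        double[] tds = {520, 585, 650, 715};  // perfectly linear in ec
        System.out.println(correlation(ec, tds)); // close to 1.0
    }
}
```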
The results obtained covered all the results expected at the beginning of the project. The forecasting of the ground water level gave a minimal root mean square error and an accuracy of 85%, and the potability classification of the ground water gave a good accuracy of nearly 92%. The ground water quality data for each well was considered for 20 years and the salt contents were forecasted for the upcoming year. Similarly, the water level data covered 10 years and the levels were forecasted for the upcoming year.
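The root mean square error reported for the water level forecasts can be computed as in this sketch (the sample values are illustrative):

```java
public class RootMeanSquareError {
    // RMSE between forecasted and observed values (here, water levels
    // in meters): the square root of the mean squared error.
    public static double rmse(double[] predicted, double[] actual) {
        double sumSq = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double e = predicted[i] - actual[i];
            sumSq += e * e;
        }
        return Math.sqrt(sumSq / actual.length);
    }

    public static void main(String[] args) {
        double[] forecast = {1.83, 2.07, 1.34}; // illustrative seasonal levels
        double[] observed = {1.90, 2.00, 1.40};
        System.out.printf("RMSE = %.3f m%n", rmse(forecast, observed));
    }
}
```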
CONCLUSION AND FUTURE ENHANCEMENT
Groundwater is a critical resource, and its correct and accurate estimation over time is one of the central aims of this project. Such estimation aids not only sustainable use but also the formulation of new legislation, regulations and rules governing the groundwater resources of Karnataka. Groundwater resource estimation is one of the primary objectives of the Central Ground Water Board, Bangalore.
Among the 32 wells of Chikballapur, two wells are taken for implementation, as the remaining ones do not have satisfactory data for training and testing. The results obtained show that the accuracy of forecasting the water quality parameters is 87%, the forecasting of the ground water level has an accuracy of 85%, and the classification results show a good accuracy of around 92%. This forecasting of groundwater levels and classification of groundwater for potability is made accessible to the concerned authorities, as well as the common man, through a web application.
From the results and analysis, the following conclusions are made:
It is not possible to achieve complete accuracy, especially in the field of prediction; it takes many professional hands working tirelessly for a long time to deliver nearly perfect software with an optimum trade-off between performance, accuracy and speed. However, with every success come certain limitations. The limitations are as follows:
Enhancements are a necessary key to continuous improvement and increased efficiency. Stated below are a few enhancements, or items of future work, that can be incorporated into this project:
[1] Julia Murphy and Max Roser (2017) – ‘Internet’. Published online at OurWorldInData.org. Retrieved from: https://ourworldindata.org/internet/ [Online Resource]
[2] QUIC at 10,000 feet. https://docs.google.com/document/d/1gY9-YNDNAB1eip-RTPbqphgySwSNSDHLq9D5Bty4FSU/edit
[3] CUBIC for Fast Long-Distance Networks. https://tools.ietf.org/html/draft-rhee-tcpm-cubic-02.
[4] Najafabadi, Alireza Taravat. “Preparing of groundwater maps by using Geographical Information System.” IEEE 17th International Conference on Geoinformatics, Fairfax, Virginia, 2009, pp. 1-4.
[5] Cong, Fangjie, and Yanfang Diao. “Development of Dalian Groundwater Resources Management Geographical Information System”, IEEE International conference on Web Information Systems and Mining (WISM), Sanya, China, Vol. 1, 2010, pp. 388-390.
[6] Weihong, Zhang, Zhao Yongsheng, Dong Jun, and Hong Mei. “An early warning system for groundwater pollution based on GIS.” IEEE International Symposium on Water Resource and Environmental Protection (ISWREP), Xi’an, Shaanxi Province, China, vol. 4, 2011, pp. 2773-2776.
[7] Yang, Zhenghua, Yanping Liu, and Haowen Yan. “GIS-based analysis and visualization of the groundwater changes in Minqin Oasis”. IEEE International Symposium on Geomatics for Integrated Water Resources Management (GIWRM), Lanzhou, China, 2012, pp. 1-4.
[8] Song, Mei, and Wang Lijie. “Study on groundwater optimization model based on GIS.”, IEEE 5th International Joint Conference on INC, IMS and IDC, Seoul, South Korea, 2011, pp. 977-980.
[9] Sonam Pal, Rakesh Sharma. “An Intelligent Decision Support System for Establishment of New Organization on Any Geographical Area Using GIS”, International Journal on Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 8, August 2013, pp 45-49.
[10] Mogaji, K. A., H. S. Lim, and K. Abdullah. “Modeling groundwater vulnerability prediction using geographic information system (GIS)-based ordered weighted average (OWA) method and DRASTIC model theory hybrid approach”, Arabian Journal of Geosciences, no. 12, June 2014, pp 5409-5429.
[11] Efren L. Linan, Victor B. Ella, Leonardo M. Florece, “DRASTIC Model and GIS-Based Assessment of Groundwater Vulnerability to Contamination in Boracay Island, Philippines”, Proceedings of the International Conference on Advances in Applied Science and Environmental Engineering, 2014.
[12] Muzafar N. Teli, Nisar A. Kuchhay,” Spatial Interpolation Technique For Groundwater Quality Assessment Of District Anantnag J&K”, International Journal of Engineering Research and Development, Volume 10, Issue 3, March 2014, pp 55-66.
[13] Yiteng Huang, Jacob Benesty, Jingdong Chen, “Using the Pearson correlation coefficient to develop an optimally weighted cross relation based blind SIMO identification algorithm”, IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 2013, pp 3153-3156.
[14] M.Kannan, S.Prabhakaran and P.Ramachandran, “Rainfall Forecasting Using Data Mining Technique”, International Journal of Engineering and Technology, Vol.2, no.6, July 2010, pp 397-401.
[15] Changjun Zhu, Qinghua Liu, “Evaluation of water Quality using Grey Clustering”, Second International workshop on Knowledge Discovery and Data Mining, July 2009, pp 10- 13.
[16] Andrew Kusiak, Xiupeng Wei, Anoop Prakash Verma and Evan Roz, “Modeling and Prediction of Rainfall Using Radar Reflectivity Data: A Data-Mining Approach”, IEEE Transactions on Geoscience and Remote Sensing, Vol.51, no.4, April 2013, pp 796-807.
[17] Lougheed, Vanessa L., Barb Crosbie, and Patricia Chow-Fraser, “Predictions on the effect of common carp (Cyprinus carpio) exclusion on water quality, zooplankton, and submergent macrophytes in a Great Lakes wetland” Canadian Journal of Fisheries and Aquatic Sciences, Volume 55, Issue 5, January 2008, pp 1189-1197.
[18] Giardino, Claudia, et al. “Assessment of water quality in Lake Garda (Italy) using Hyperion.” Remote Sensing of Environment, Volume 109, Issue 2, December 2007, pp 183-195.
[19] Wu, Wen-Jie, and Yan Xu. “Correlation analysis of visual verbs’ subcategorization based on Pearson’s correlation coefficient.” IEEE International Conference on Machine Learning and Cybernetics (ICMLC), Qingdao, China, Volume 4, June 2010, pp 2042-2046.
[20] Simin Li, Nannan Zhao, Zhennan Shi, Fengbing Tang, “Application of Artificial Neural Network on Water Quality Evaluation of Fuyang River in Handan City”, IEEE Transactions on neural network, Volume 7, no. 6, May 2010, pp 246-256.
[21] Wang, Bo, et al. “Finding Correlated item pairs through efficient pruning with a given threshold.” The Ninth International Conference on Web-Age Information Management (WAIM), Zhangjiajie Hunan, China, July 2008, pp 413-420.
[22] Scott Armstrong J., Fred Collopy, “Error Measures For Generalizing About Forecasting Methods: Empirical Comparisons”, International Journal of Forecasting, Volume 26, Issue 1, January 2008, pp 69-80.
[23] Ramakrishnaiah, C. R., C. Sadashivaiah, and G. Ranganna. “Assessment of water quality index for the groundwater in Tumkur Taluk, Karnataka State, India.” Journal of Chemistry, October 2009, pp 523-530.
[24] Ruiz-Luna, A., and C. A. Berlanga-Robles. “Modifications in coverage patterns and land use around the Huizache-Caimanero lagoon system, Sinaloa, Mexico: a multi-temporal analysis using LANDSAT images” Journal of Estuarine, Coastal and Shelf Science, July 1999, Volume 49, Issue 1, pp 37-44.
[25] Zhang, Qiang, et al. “Observed trends of annual maximum water level and streamflow during past 130 years in the Yangtze River basin, China.” International Journal of Hydrology, Volume 324, Issue 4, March 2006, pp 255-265.
[26] Lou, Wengao, and Shuryo Nakai. “Application of artificial neural networks for predicting the thermal inactivation of bacteria: a combined effect of temperature, pH and water activity.” Journal on Food Research International, Volume 34, Issue 1, July 2001, pp 573-579.
[27] Shoba G, Shobha G, “Rainfall Prediction Using Data Mining Techniques”, International Journal of Engg and Computer Science, Volume 3, Issue 5, May 2014, pp 6206-6211.
[28] David Meyer, “Support Vector Machines”, Technische University, Wien, Austria.
[29] Ian Sommerville, “Software Engineering”, Pearson Publications, 9th Edition, 2011, ISBN: 978-013703515.
[30] Vipin Kumar, “Introduction to Data Mining”, Pearson Publications, 2011, ISBN: 978-81317-1472-0.
[31] Herbert Schildt, “Java: The Complete Reference”, McGraw Hill Publications, 8th edition, ISBN: 9780072123548.
[32] Jim Keogh, “Java EE: The Complete reference”, McGraw Hill Publications, 5th edition, ISBN: 007222472X.
[33] Patrick Naughton, “The Java Handbook”, McGraw Hill Publications, ISBN: 0-07882199-1.
[34] Tim Lindholm, Bill Joy, “The Java Virtual Machine Specification”, Addison-Wesley Publications, ISBN: 0-201-69581-2.
[35] Zhu Xiaochun, Zhou Bo, “A test Automation solution on GUI Functional test”, IEEE international conference on industrial informatics, July 2008, pages 1413-1418.
[36] Benesty J, Jingdong Chen, Yiteng Huang, “On the importance of Pearson correlation coefficient in noise reduction”, IEEE transactions, May 2008, pages 757-765.
[37] Simon Kendal, “Object oriented programming with Java”, Pearson Publications, 1st edition, ISBN: 978-87-7681-501-1.
[38] Oracle whitepaper, “Advanced Java Diagnostics and Monitoring without performance overhead”, September 2013.
[39] Agarwal G., Li J.,Su Q., “Evaluating a demand driven technique for call graph construction”, 2002, pages 29-45.
[40] Forman, I.R., Forman, N., “Java Reflection in Action”, Manning Publications (2004).
[41] Shoba G, Shobha G, “Water Quality Prediction Using Data Mining techniques: A Survey”, International Journal Of Engineering And Computer Science, Volume 3 Issue 6, June 2014, pp 6299-6306.
[42] G. Shobha, Jayavardhana Gubbi, Krishna S Raghavan, “A novel fuzzy rule based system for assessment of ground water potability: A case study in South India”, IOSR Journal of Computer Engineering (IOSR-JCE), Volume 15, Issue 2, December 2013, pp 35-41.
[43] Hal Daumé III, “A Course in Machine Learning”, Version 0.8, August 2012.
[44] Tom M. Mitchell, “The Discipline of Machine Learning”, School of Computer Science, Carnegie Mellon University, Pittsburgh, July 2006, pp 312-317.
[45] Mu Huaxin, Jiang Shuai, “Design Patterns in Software Development”, Proceeding of the IEEE 2nd International Conference on Software Engineering and Service Sciences, Beijing, July 2011, pp 15-17.
[46] Rod Johnson, Juergen Hoeller, Alef Arendsen, “Overview of Spring Framework”, in Introduction to the Spring Framework, December 2005, pp 5-24.
[47] Jeff Linwood and Dave Minter, “An Introduction to Hibernate”, in Beginning Hibernate: An Introduction to Persistence Using Hibernate, Springer, 2nd edition, June 2010, pp 1-10.
[48] Ron Patton, “Software Testing”, Pearson Education India, 2nd Edition, 2006.
[49] Susan Hyde, Thad Dunning, “The Analysis of Experimental Data: Comparing Techniques”, Proceeding of the Annual Meeting of American Political Science Association, Boston, December 2008, pp 233-242.
[50] Andrew Ng, “Introduction to Machine Learning”, Stanford University.
[51] Marina Sokolova, Nathalie Japkowicz, Stan Szpakowicz, “Beyond Accuracy, F-Score and ROC: A Family of Discriminant Measures for Performance Evaluation”, Advances in Artificial Intelligence, Vol 43, no.4, June 2006, pp 1015-1021.
Appendix A – Screenshots of FCGWS
Figure A.1 shows the home page of the web application. Options are provided for the user to view the water level and water quality potability forecasts. The admin of the system can log in and insert water quality and water level values for the upcoming years.
Figure A.2 displays the home page of the water quality potability forecasting module, from which the user can browse the quality data for all the cases under consideration, as illustrated in the figure below. The page also embeds a Google Map on which each well number is marked with its potability status.
Figure A.3 displays the home page of the water level forecasting module, from which the user can browse the water level data for all the cases under consideration, as illustrated in the figure below. The page also embeds a Google Map on which each well number is marked with its water level for all three seasons.
Figure A.4 illustrates the result page of the water quality forecasting module. On selecting a well number and submitting in Figure A.2, the forecasted result for the next year is displayed along with the potability status; the parameter values responsible for non-potability are highlighted in red.
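The red highlighting in Figure A.4 implies a per-parameter check against permissible limits. A minimal sketch of such a check is given below; the parameter names and limit values here are illustrative assumptions, not the ones configured in the actual FCGWS system.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PotabilityCheck {

    // Returns the names of parameters whose forecasted value exceeds
    // its permissible limit; the UI would render these in red.
    static List<String> violations(Map<String, Double> sample,
                                   Map<String, Double> limits) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, Double> e : sample.entrySet()) {
            Double limit = limits.get(e.getKey());
            if (limit != null && e.getValue() > limit) {
                bad.add(e.getKey());
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // Assumed permissible limits (mg/L) for illustration only.
        Map<String, Double> limits = new HashMap<>();
        limits.put("nitrate", 45.0);
        limits.put("fluoride", 1.5);

        // Hypothetical forecasted values for one well.
        Map<String, Double> sample = new HashMap<>();
        sample.put("nitrate", 52.0);
        sample.put("fluoride", 0.8);

        System.out.println(violations(sample, limits)); // [nitrate]
    }
}
```

A well is then reported as non-potable whenever the returned list is non-empty.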
Figure A.5 illustrates the result page of the water level forecasting module. On selecting a well number and submitting in Figure A.3, the forecasted water level for the next year is displayed.
Figure A.6 shows the home page for the admin of the system. Only the admin has the rights to update the water quality and water level data and to insert a new taluk or well.
Figure A.7 shows the GIS maps for different parameters across three different wells, giving a spatial representation and allowing the values to be compared.
Figure A.8 shows the testing of the water quality forecasts against the observed parameter values for the year 2015, which yields a forecasting accuracy of 87%.
Figure A.9 shows the testing of the water level forecasts against the observed values for the year 2015, which yields a forecasting accuracy of 85%.
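The accuracy figures reported in Figures A.8 and A.9 can be computed by comparing each forecast against the observed 2015 value. A minimal sketch is shown below, under the assumption that a forecast counts as correct when it lies within a fixed tolerance of the observed value; the tolerance and the sample values are illustrative, not taken from the actual test data.

```java
public class ForecastAccuracy {

    // Fraction of forecasts that fall within `tolerance` of the
    // corresponding observed value.
    static double accuracy(double[] forecast, double[] observed,
                           double tolerance) {
        int correct = 0;
        for (int i = 0; i < forecast.length; i++) {
            if (Math.abs(forecast[i] - observed[i]) <= tolerance) {
                correct++;
            }
        }
        return (double) correct / forecast.length;
    }

    public static void main(String[] args) {
        // Hypothetical forecasted vs. observed values for four wells.
        double[] forecast = {7.1, 6.8, 7.5, 8.0};
        double[] observed = {7.0, 6.9, 7.9, 8.1};
        System.out.println(accuracy(forecast, observed, 0.2)); // 0.75
    }
}
```

With the real 2015 test sets, the same computation would produce the 87% and 85% figures quoted above.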