Data Processing
Data processing is vital in most organizations that handle large volumes of data, for instance, in the transport, banking, and education sectors. Data processing transforms raw data into information, and it may be carried out manually, mechanically, or through electronic devices. Turning data into meaningful output makes it easier to create reports, reduces costs, and supports easy and safe storage of data (Gu et al., 2016). This paper discusses data processing and the challenges associated with using data from different sources.
Data abstraction of clinical records is vital for medical record management, for example, when information is needed for future reference or for patient appointments. Data extraction involves various methods, such as scanning paper documents and using electronic health records, to move information from digital devices and paper files into a data abstraction system or the electronic health record (Spratling & Powers, 2017). Data abstraction is also an essential way of backing up data in case of a security attack. Electronic health records use cloud-based services for data storage and access, support better medical charts than the original paper charts, and require far less space than a dedicated room of clinical records. Additionally, data can be abstracted by scanning the original documents to obtain a digital copy, by photocopying, or by transferring information to external devices such as hard disks.
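The abstraction step described above can be sketched in code as pulling only the needed fields out of raw records into a standard abstraction form. This is a minimal illustration, assuming hypothetical field names rather than any real electronic health record schema:

```python
# Raw records (plain dictionaries standing in for scanned or exported data).
# All field names here are illustrative assumptions.
RAW_RECORDS = [
    {"patient_id": "P001", "name": "A. Smith", "visit_date": "2020-01-15",
     "notes": "Routine check-up", "billing_code": "99213"},
    {"patient_id": "P002", "name": "B. Jones", "visit_date": "2020-02-03",
     "notes": "Follow-up appointment", "billing_code": "99214"},
]

# The abstraction form keeps only the fields needed for future reference.
ABSTRACTION_FIELDS = ["patient_id", "visit_date", "notes"]

def abstract_record(record, fields):
    """Copy only the selected fields into the abstraction form."""
    return {field: record.get(field) for field in fields}

abstracted = [abstract_record(r, ABSTRACTION_FIELDS) for r in RAW_RECORDS]
```

The same pattern applies whether the raw records come from scanned paper documents or an export of the electronic health record; only the extraction step before this code would differ.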
The data normalization process applies techniques and a set of guidelines to reduce data redundancy through the use of a standard form. Redundancy can be reduced, for example, by using smaller, manageable tables or through logical database design. The main goal of normalization is to avoid duplication of data that would affect data analysis, and to group related data together. Logical database design normalizes data by arranging it into groups that can be easily understood and maintained (Vatsalan et al., 2017). A consistent database design targets the user of the database by making the data easy to use, analyze, and interpret.
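The splitting of one large table into smaller, manageable tables can be illustrated as follows. This is a simplified sketch with invented table and column names: a flat table repeats the customer name on every order row, and normalization separates it into a customers table and an orders table linked by a key:

```python
# A denormalized table: the customer name is duplicated on every order row.
flat = [
    {"order_id": 1, "customer_id": "C1", "customer_name": "Acme", "item": "pen"},
    {"order_id": 2, "customer_id": "C1", "customer_name": "Acme", "item": "ink"},
    {"order_id": 3, "customer_id": "C2", "customer_name": "Beta", "item": "pad"},
]

# Customers table: one row per customer, duplicates collapsed.
customers = {row["customer_id"]: row["customer_name"] for row in flat}

# Orders table: refers to the customer by key instead of repeating the name.
orders = [
    {"order_id": r["order_id"], "customer_id": r["customer_id"], "item": r["item"]}
    for r in flat
]
```

If a customer name changes, the update now happens in exactly one place, which is the practical payoff of removing the redundancy.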
The data reconciliation process verifies and compares data between the data source and the data target. Reconciliation takes place during data migration, where the process focuses on data observability, verification of redundancy, evaluation of variability, and identification of gross errors such as system failures and data bias. Data reconciliation includes master data reconciliation, transactional data reconciliation, and automated data reconciliation. Master data reconciliation validates the master data by checking the number of active and inactive users, the number of customers in source and target, and the number of rows (Cannon, Lefebvre, & Drillock, 2019). Transactional reconciliation validates transactional data by analyzing totals and any mismatch with the source data. Finally, automated reconciliation loads data and reports on its validity to stakeholders.
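The two manual checks described above, comparing row counts (master data) and comparing totals (transactional data), can be sketched as a small reconciliation report. The data and column names here are assumptions for illustration only:

```python
# Source and target tables after a hypothetical migration.
source = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.0},
          {"id": 3, "amount": 75.0}]
target = [{"id": 1, "amount": 100.0}, {"id": 2, "amount": 250.0},
          {"id": 3, "amount": 75.0}]

def reconcile(source_rows, target_rows):
    """Report row-count and sum mismatches between source and target."""
    report = {
        # Master data check: do row counts match?
        "row_count_match": len(source_rows) == len(target_rows),
        # Transactional check: do the amount totals match?
        "source_total": sum(r["amount"] for r in source_rows),
        "target_total": sum(r["amount"] for r in target_rows),
    }
    report["sum_match"] = report["source_total"] == report["target_total"]
    return report

result = reconcile(source, target)
```

An automated reconciliation process would run checks like these after each load and deliver the resulting report to stakeholders, as the paragraph above describes.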
Multiple data sources come with various challenges, for instance, handling frequent changes in data and data mapping. Data from multiple sources can be altered at any time as information is updated in or removed from the database. For the user or host to keep up with the current data, an adequate level of mapping is vital (Vatsalan et al., 2017). Data mapping must therefore be applied, which is an extra activity in data processing. Frequent data mapping compares the similarity and validity of information acquired from one source against another. Multiple data sources can also lead to weak and ineffective research reports, which affects the validity of a study. Additionally, sources may be inconsistent with one another, with some sources more reliable than others, reducing the overall consistency of the data.
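The data mapping activity described above can be sketched as renaming each source's fields onto one common schema so that records become comparable. The source systems, field names, and mappings below are all hypothetical:

```python
# Each source names the same fields differently; a per-source mapping
# translates them onto one common schema.
SOURCE_A_MAP = {"cust_no": "customer_id", "full_name": "name"}
SOURCE_B_MAP = {"id": "customer_id", "customer": "name"}

def map_record(record, field_map):
    """Rename source fields to the common schema; drop unmapped fields."""
    return {field_map[k]: v for k, v in record.items() if k in field_map}

record_a = map_record(
    {"cust_no": "C1", "full_name": "Acme", "region": "EU"}, SOURCE_A_MAP)
record_b = map_record(
    {"id": "C1", "customer": "Acme"}, SOURCE_B_MAP)

# After mapping, records from both sources can be compared directly,
# which supports the consistency and validity checks discussed above.
records_match = record_a == record_b
```

Because source data changes frequently, these mappings must be maintained over time, which is the extra processing cost the paragraph above notes.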
Data processing is a vast area that includes processes such as data abstraction, normalization, and data reconciliation. Getting data from multiple sources is challenging, especially with regard to data consistency, validity, and frequent data changes.

References

Cannon, T., Lefebvre, M., & Drillock, G. (2019). U.S. Patent No. 10,182,083. Washington, DC: U.S. Patent and Trademark Office.
Cencic, O., & Frühwirth, R. (2018). Data reconciliation of nonnormal observations with nonlinear constraints. Journal of Applied Statistics, 45(13), 2411-2428.
Gu, B., Yoon, A. S., Bae, D. H., Jo, I., Lee, J., Yoon, J., … & Jeong, J. (2016). Biscuit: A framework for near-data processing of big data workloads. ACM SIGARCH Computer Architecture News, 44(3), 153-165.
Spratling, R., & Powers, E. (2017). Development of a Data Abstraction Form: Getting What You Need From the Electronic Health Record. Journal of Pediatric Health Care, 31(1), 126-130.
Vatsalan, D., Sehili, Z., Christen, P., & Rahm, E. (2017). Privacy-preserving record linkage for big data: Current approaches and research challenges. In Handbook of Big Data Technologies (pp. 851-895). Springer, Cham.