
Step By Step Tutorial SQL Server For Data Science With Python GUI





Step By Step Tutorial SQL Server For Data Science With Python GUI


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2022-11-13

Step By Step Tutorial SQL Server For Data Science With Python GUI was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2022-11-13 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


This book uses the SQL Server version of the MySQL-based Northwind database. The Northwind database is a sample database that was originally created by Microsoft and used as the basis for its tutorials in a variety of database products for decades. It contains the sales data for a fictitious company called “Northwind Traders,” which imports and exports specialty foods from around the world. The Northwind database is an excellent tutorial schema for a small-business ERP, with customers, orders, inventory, purchasing, suppliers, shipping, employees, and single-entry accounting. It has since been ported to a variety of non-Microsoft databases, including SQL Server.

The Northwind dataset includes sample data for the following: Suppliers: suppliers and vendors of Northwind; Customers: customers who buy products from Northwind; Employees: employee details of Northwind Traders; Products: product information; Shippers: details of the shippers who ship the products from the traders to the end customers; and Orders and Order_Details: sales order transactions taking place between the customers and the company.

In this project, you will write Python scripts to create every table and insert rows of data into each of them. You will develop a PyQt5 GUI for each table in the database. You will also create GUIs to plot: the case distribution of order date by year, quarter, month, week, day, and hour; the distribution of amount by year, quarter, month, week, day, and hour; the distribution of bottom and top 10 sales by product, by customer, by supplier, by customer country, and by supplier country; the average amount by month with mean and EWM; the average amount by every month; the amount feature over June 1997; the amount feature over 1998; and the whole amount feature.
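As a rough illustration of the table-creation and insertion step, here is a minimal sketch using pyodbc; the driver string, server, credentials, and the Suppliers columns shown are assumptions for illustration, not the book's exact code:

```python
# Minimal sketch (not the book's code): create one Northwind table in SQL Server
# and insert a row via pyodbc. Connection details below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Northwind;UID=sa;PWD=your_password"
)
cur = conn.cursor()

# Create the Suppliers table if it does not already exist (T-SQL).
cur.execute("""
IF OBJECT_ID('Suppliers', 'U') IS NULL
CREATE TABLE Suppliers (
    SupplierID INT IDENTITY(1,1) PRIMARY KEY,
    CompanyName NVARCHAR(40) NOT NULL,
    Country NVARCHAR(15)
)
""")

# Insert one sample row with a parameterized query.
cur.execute(
    "INSERT INTO Suppliers (CompanyName, Country) VALUES (?, ?)",
    ("Exotic Liquids", "UK"),
)
conn.commit()
conn.close()
```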



Tkinter Data Science And Machine Learning


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-09-02

Tkinter Data Science And Machine Learning was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-09-02 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


In this project, we embarked on a comprehensive journey through the world of machine learning and model evaluation. Our primary goal was to develop a Tkinter GUI and assess various machine learning models on a given dataset to identify the best-performing one. This process is essential in solving real-world problems, as it helps us select the most suitable algorithm for a specific task. By crafting this Tkinter-powered GUI, we provided an accessible and user-friendly interface for users engaging with machine learning models. It simplified intricate processes, allowing users to load data, select models, initiate training, and visualize results without necessitating code expertise or command-line operations. This GUI introduced a higher degree of usability and accessibility to the machine learning workflow, accommodating users with diverse levels of technical proficiency.

We began by loading and preprocessing the dataset, a fundamental step in any machine learning project. Proper data preprocessing involves tasks such as handling missing values, encoding categorical features, and scaling numerical attributes. These operations ensure that the data is in a format suitable for training and testing machine learning models.

Once our data was ready, we moved on to the model selection phase. We evaluated multiple machine learning algorithms, each with its strengths and weaknesses. The models we explored included Logistic Regression, Random Forest, K-Nearest Neighbors (KNN), Decision Trees, Gradient Boosting, Extreme Gradient Boosting (XGBoost), Multi-Layer Perceptron (MLP), and Support Vector Classifier (SVC). For each model, we employed a systematic approach to find the best hyperparameters using grid search with cross-validation. This technique allowed us to explore different combinations of hyperparameters and select the configuration that yielded the highest accuracy on the training data. These hyperparameters included settings like the number of estimators, learning rate, and kernel function, depending on the specific model.

After obtaining the best hyperparameters for each model, we trained them on our preprocessed dataset. This training process involved using the training data to teach the model to make predictions on new, unseen examples. Once trained, the models were ready for evaluation.

We assessed the performance of each model using a set of well-established evaluation metrics. These metrics included accuracy, precision, recall, and F1-score. Accuracy measured the overall correctness of predictions, while precision quantified the proportion of true positive predictions out of all positive predictions. Recall, on the other hand, represented the proportion of true positive predictions out of all actual positives, highlighting a model's ability to identify positive cases. The F1-score combined precision and recall into a single metric, helping us gauge the overall balance between these two aspects.

To visualize the model's performance, we created key graphical representations. These included confusion matrices, which showed the number of true positive, true negative, false positive, and false negative predictions, aiding in understanding the model's classification results. Additionally, we generated Receiver Operating Characteristic (ROC) curves and area under the curve (AUC) scores, which depicted a model's ability to distinguish between classes. High AUC values indicated excellent model performance.
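A condensed sketch of the workflow just described, grid search with cross-validation followed by metric evaluation; the stand-in dataset, model, and parameter grid are illustrative assumptions, not the book's exact configuration:

```python
# Minimal sketch (not the book's code): tune one of the listed models with
# GridSearchCV, then score it with accuracy, precision, recall, F1, and AUC.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)          # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Exhaustive search over a small hyperparameter grid, scored by 5-fold CV accuracy.
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5, scoring="accuracy",
)
grid.fit(X_train, y_train)

y_pred = grid.predict(X_test)
y_prob = grid.predict_proba(X_test)[:, 1]           # probabilities for the ROC/AUC
print("best params:", grid.best_params_)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("AUC      :", roc_auc_score(y_test, y_prob))
```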
Furthermore, we constructed true values versus predicted values diagrams to provide insights into how well our models aligned with the actual data distribution. Learning curves were also generated to observe a model's performance as a function of training data size, helping us assess whether the model was overfitting or underfitting. Lastly, we presented the results in a clear and organized manner, saving them to Excel files for easy reference. This allowed us to compare the performance of different models and make an informed choice about which one to select for our specific task. In summary, this project was a comprehensive exploration of the machine learning model development and evaluation process. We prepared the data, selected and fine-tuned various models, assessed their performance using multiple metrics and visualizations, and ultimately arrived at a well-informed decision about the most suitable model for our dataset. This approach serves as a valuable blueprint for tackling real-world machine learning challenges effectively.
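The save-to-Excel step might look like the following sketch, assuming pandas (with openpyxl installed for .xlsx output); the file name and the metric values are placeholders, not real results:

```python
# Minimal sketch (an assumption, not the book's code): collect per-model metrics
# in a DataFrame and write them to an Excel file for side-by-side comparison.
import pandas as pd

results = pd.DataFrame([
    {"model": "RandomForest", "accuracy": 0.96, "precision": 0.97, "recall": 0.97, "f1": 0.97},
    {"model": "SVC",          "accuracy": 0.94, "precision": 0.95, "recall": 0.95, "f1": 0.95},
])  # placeholder numbers for illustration only
results.to_excel("model_results.xlsx", index=False)  # requires openpyxl
```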



Data Visualization, Time-Series Forecasting, And Prediction Using Machine Learning With Tkinter


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-09-06

Data Visualization, Time-Series Forecasting, And Prediction Using Machine Learning With Tkinter was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-09-06 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


This "Data Visualization, Time-Series Forecasting, and Prediction using Machine Learning with Tkinter" project is a comprehensive and multifaceted application that leverages data visualization, time-series forecasting, and machine learning techniques to gain insights into bitcoin data and make predictions. This project serves as a valuable tool for financial analysts, traders, and investors seeking to make informed decisions in the stock market. The project begins with data visualization, where historical bitcoin market data is visually represented using various plots and charts. This provides users with an intuitive understanding of the data's trends, patterns, and fluctuations. Features distribution analysis is conducted to assess the statistical properties of the dataset, helping users identify key characteristics that may impact forecasting and prediction. One of the project's core functionalities is time-series forecasting. Through a user-friendly interface built with Tkinter, users can select a stock symbol and specify the time horizon for forecasting. The project supports multiple machine learning regressors, such as Linear Regression, Decision Trees, Random Forests, Gradient Boosting, Extreme Gradient Boosting, Multi-Layer Perceptron, Lasso, Ridge, AdaBoost, and KNN, allowing users to choose the most suitable algorithm for their forecasting needs. Time-series forecasting is crucial for making predictions about stock prices, which is essential for investment strategies. The project employs various machine learning regressors to predict the adjusted closing price of bitcoin stock. By training these models on historical data, users can obtain predictions for future adjusted closing prices. This information is invaluable for traders and investors looking to make buy or sell decisions. The project also incorporates hyperparameter tuning and cross-validation to enhance the accuracy of these predictions. These models employ metrics such as Mean Absolute Error (MAE), which quantifies the average absolute discrepancy between predicted values and actual values. Lower MAE values signify superior model performance. Additionally, Mean Squared Error (MSE) is used to calculate the average squared differences between predicted and actual values, with lower MSE values indicating better model performance. Root Mean Squared Error (RMSE), derived from MSE, provides insights in the same units as the target variable and is valued for its lower values, denoting superior performance. Lastly, R-squared (R2) evaluates the fraction of variance in the target variable that can be predicted from independent variables, with higher values signifying better model fit. An R2 of 1 implies a perfect model fit. In addition to close price forecasting, the project extends its capabilities to predict daily returns. By implementing grid search, users can fine-tune the hyperparameters of machine learning models such as Random Forests, Gradient Boosting, Support Vector, Decision Tree, Gradient Boosting, Extreme Gradient Boosting, Multi-Layer Perceptron, and AdaBoost Classifiers. This optimization process aims to maximize the predictive accuracy of daily returns. Accurate daily return predictions are essential for assessing risk and formulating effective trading strategies. 
Key metrics in these classifiers encompass Accuracy, which represents the ratio of correctly predicted instances to the total number of instances, Precision, which measures the proportion of true positive predictions among all positive predictions, and Recall (also known as Sensitivity or True Positive Rate), which assesses the proportion of true positive predictions among all actual positive instances. The F1-Score serves as the harmonic mean of Precision and Recall, offering a balanced evaluation, especially when considering the trade-off between false positives and false negatives. The ROC Curve illustrates the trade-off between Recall and False Positive Rate, while the Area Under the ROC Curve (AUC-ROC) summarizes this trade-off. The Confusion Matrix provides a comprehensive view of classifier performance by detailing true positives, true negatives, false positives, and false negatives, facilitating the computation of various metrics like accuracy, precision, and recall.

The selection of these metrics hinges on the project's specific objectives and the characteristics of the dataset, ensuring alignment with the intended goals and the ramifications of false positives and false negatives, which hold particular significance in financial contexts where decisions can have profound consequences.

Overall, the "Data Visualization, Time-Series Forecasting, and Prediction using Machine Learning with Tkinter" project serves as a powerful and user-friendly platform for financial data analysis and decision-making. It bridges the gap between complex machine learning techniques and accessible user interfaces, making financial analysis and prediction more accessible to a broader audience. With its comprehensive features, this project empowers users to gain insights from historical data, make informed investment decisions, and develop effective trading strategies in the dynamic world of finance. You can download the dataset from: http://viviansiahaan.blogspot.com/2023/09/data-visualization-time-series.html.
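A minimal sketch of computing these classification metrics with scikit-learn, using placeholder labels and predictions rather than real model output:

```python
# Minimal sketch (assumption): score a daily-return direction classifier with the
# metrics described above; y_true/y_pred/y_score are placeholders, not real output.
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score)

y_true  = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = positive daily return
y_pred  = [1, 0, 1, 0, 0, 1, 1, 0]                  # hard class predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.6, 0.1]  # predicted probabilities

print(confusion_matrix(y_true, y_pred))             # TP/TN/FP/FN counts
print(classification_report(y_true, y_pred))        # precision, recall, F1
print("accuracy:", accuracy_score(y_true, y_pred))
print("AUC-ROC :", roc_auc_score(y_true, y_score))
```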



Data Analysis Using JDBC And SQL Server With Object Oriented Approach And Apache NetBeans IDE


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-05-24

Data Analysis Using JDBC And SQL Server With Object Oriented Approach And Apache NetBeans IDE was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-05-24 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


This book is the SQL Server version of our previous book titled “DATA ANALYSIS USING JDBC AND MYSQL WITH OBJECT-ORIENTED APPROACH AND APACHE NETBEANS IDE”. In this project, you will use the SQL Server version of the Northwind database, a sample database that was originally created by Microsoft and used as the basis for its tutorials in a variety of database products for decades. The Northwind database contains the sales data for a fictitious company called “Northwind Traders,” which imports and exports specialty foods from around the world. It is an excellent tutorial schema for a small-business ERP, with customers, orders, inventory, purchasing, suppliers, shipping, employees, and single-entry accounting. You can download the sample database from https://viviansiahaan.blogspot.com/2023/05/data-analysis-using-jdbc-and-sql-server.html.

In this project, you will design the form for every table, and you will plot: the territory distribution by region; the employee distributions based on city, country, title, and region; the employee distributions based on birth date, hire date, and employee name; the employee distributions based on city, country, territory, and region; the three supplier distributions based on city, region, and country; the product distributions based on city, region, country, categorized unit price, categorized units in stock, and categorized units on order; the customer distributions based on city, region, and country; the order and freight distributions based on year, month, and week; the order and freight distributions based on day, quarter, and ship country; the order and freight distributions based on ship region, ship city, and ship name; the order and freight distributions based on shipper company, customer company, and customer city; the order and freight distributions based on customer country, employee name, and employee title; the sales distributions based on year, month, week, day, quarter, and ship country; the sales distributions based on ship region, ship city, ship name, shipper company, customer company, and customer city; the sales distributions based on customer region, customer country, employee name, employee title, employee city, and employee country; and the sales distributions based on product name, category name, supplier company, supplier city, supplier region, and supplier country.



Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 2


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-01-30

Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 2 was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-01-30 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


The Sakila database consists of 15 tables, including film, film_category, actor, customer, rental, payment, and inventory, among others. The Sakila sample database, a fictitious database designed to represent a DVD rental store, is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. Our previous book, part 1, implements the first six tables in the Sakila database: actor, language, film, category, film_category, and film_actor. This book, as the second part, uses five tables in the Sakila sample database: country, city, address, store, and staff.



Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 3


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-02-08

Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 3 was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-02-08 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


The Sakila database consists of 15 tables, including film, film_category, actor, customer, rental, payment, and inventory, among others. The Sakila sample database, a fictitious database designed to represent a DVD rental store, is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. Our books, parts 1 and 2, have been published, implementing the first eleven tables in the Sakila database: actor, language, film, category, film_category, film_actor, country, city, address, store, and staff. This book, as part 3, develops a step-by-step object-oriented programming and Java GUI tutorial using NetBeans to implement the remaining four tables: customer, inventory, rental, and payment.



Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 1


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-01-22

Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE Part 1 was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-01-22 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


This book uses six tables in the Sakila sample database, a fictitious database designed to represent a DVD rental store. The database consists of 15 tables, including film, film_category, actor, customer, rental, payment, and inventory, among others. The Sakila sample database is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. In this book, as part 1, you will develop a step-by-step object-oriented programming and Java GUI tutorial using NetBeans to implement the first six tables in the Sakila database: actor, language, film, category, film_category, and film_actor.



Part 1-3 Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-02-11

Part 1-3 Step By Step Tutorial Java MySQL With Object Oriented Programming Using Apache NetBeans IDE was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-02-11 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


PART 1: This book uses six tables in the Sakila sample database, a fictitious database designed to represent a DVD rental store. The database consists of 15 tables, including film, film_category, actor, customer, rental, payment, and inventory, among others. The Sakila sample database is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. In this book, as part 1, you will develop a step-by-step object-oriented programming and Java GUI tutorial using NetBeans to implement the first six tables in the Sakila database: actor, language, film, category, film_category, and film_actor.

PART 2: Our previous book, part 1, implements the first six tables in the Sakila database: actor, language, film, category, film_category, and film_actor. This book, as the second part, uses five tables in the Sakila sample database: country, city, address, store, and staff.

PART 3: Our books, parts 1 and 2, implement the first eleven tables in the Sakila database: actor, language, film, category, film_category, film_actor, country, city, address, store, and staff. This book, as part 3, develops a step-by-step object-oriented programming and Java GUI tutorial using NetBeans to implement the remaining four tables: customer, inventory, rental, and payment.



Data Science With JDBC And SQLite Using Object Oriented Approach And Apache NetBeans IDE


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2023-03-11

Data Science With JDBC And SQLite Using Object Oriented Approach And Apache NetBeans IDE was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2023-03-11 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


In this project, you will develop a step-by-step implementation of JDBC/SQLite with an object-oriented approach, using the SQLite version of an Oracle sample database named electronics. You will be taught how to plot: the country distribution in each region; the location distribution in each country and region; the warehouse distribution in each country, region, and city; the product distribution by category name; the categorized standard cost and categorized list price distributions in the products table; the categorized values in the inventories table; the employee distribution by job title; the customer distribution by categorized credit limit; the order distribution by customer, employee, status, and categorized credit limit; the top 10 sales distribution by product name; the top 10 sales distribution by category name; the order distribution by category; and the order distribution by status.

The electronics database itself is based on a global fictitious company that sells computer hardware, including storage, motherboards, RAM, video cards, and CPUs. You can download the sample database from https://viviansiahaan.blogspot.com/2023/03/book-jdbc-and-sqlite-with-object.html. In the database, the company maintains product information such as name, description, standard cost, list price, and product line. It also tracks inventory information for all products, including the warehouses where products are available. Because the company operates globally, it has warehouses in various locations around the world.

The company records all customer information, including name, address, and website. Each customer has at least one contact person with detailed information including name, email, and phone. The company also places a credit limit on each customer to limit the amount that the customer can owe. Whenever a customer issues a purchase order, a sales order is created in the database with the pending status. When the company ships the order, the order status becomes shipped. In case the customer cancels an order, the order status becomes canceled. In addition to the sales information, employee data is recorded with some basic information such as name, email, phone, job title, manager, and hire date.



Data Science And Deep Learning Workshop For Scientists And Engineers


Author: Vivian Siahaan
Language: en
Publisher: BALIGE PUBLISHING
Release Date: 2021-11-04

Data Science And Deep Learning Workshop For Scientists And Engineers was written by Vivian Siahaan and published by BALIGE PUBLISHING. It was released on 2021-11-04 in the Computers category and is available in PDF, TXT, EPUB, Kindle, and other formats.


WORKSHOP 1: In this workshop, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and other libraries to implement deep learning for recognizing traffic signs using the GTSRB dataset, detecting brain tumors using the Brain Image MRI dataset, classifying gender, and recognizing facial expressions using the FER2013 dataset.

In Chapter 1, you will learn to create GUI applications to display a line graph using PyQt. You will also learn how to display an image and its histogram. In Chapter 2, you will learn how to use TensorFlow, Keras, Scikit-Learn, Pandas, NumPy, and other libraries to perform prediction on handwritten digits using the MNIST dataset with PyQt. You will build a GUI application for this purpose. In Chapter 3, you will learn how to recognize traffic signs using the GTSRB dataset from Kaggle. There are several different types of traffic signs, like speed limits, no entry, traffic signals, turn left or right, children crossing, no passing of heavy vehicles, etc. Traffic sign classification is the process of identifying which class a traffic sign belongs to. In this Python project, you will build a deep neural network model that can classify traffic signs in an image into different categories. With this model, you will be able to read and understand traffic signs, which is a very important task for all autonomous vehicles. You will build a GUI application for this purpose. In Chapter 4, you will learn how to detect brain tumors using the Brain Image MRI dataset provided by Kaggle (https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection) using a CNN model. You will build a GUI application for this purpose. In Chapter 5, you will learn how to classify gender using the dataset provided by Kaggle (https://www.kaggle.com/cashutosh/gender-classification-dataset) using MobileNetV2 and CNN models. You will build a GUI application for this purpose. In Chapter 6, you will learn how to recognize facial expressions using the FER2013 dataset provided by Kaggle (https://www.kaggle.com/nicolejyt/facialexpressionrecognition) using a CNN model. You will also build a GUI application for this purpose.

WORKSHOP 2: In this workshop, you will learn how to use TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and other libraries to implement deep learning for classifying fruits, classifying cats/dogs, detecting furniture, and classifying fashion. A minimal sketch of the kind of CNN these chapters build appears after this workshop's outline.

In Chapter 1, you will learn to create GUI applications to display a line graph using PyQt, as well as how to display an image and its histogram. Then, you will learn how to use OpenCV, NumPy, and other libraries to perform feature extraction with a Python GUI (PyQt). The feature detection techniques used in this chapter are Harris Corner Detection, the Shi-Tomasi Corner Detector, and the Scale-Invariant Feature Transform (SIFT). In Chapter 2, you will learn how to classify fruits using the Fruits 360 dataset provided by Kaggle (https://www.kaggle.com/moltean/fruits/code) using Transfer Learning and CNN models. You will build a GUI application for this purpose. In Chapter 3, you will learn how to classify cats/dogs using the dataset provided by Kaggle (https://www.kaggle.com/chetankv/dogs-cats-images) using a CNN with a data generator. You will build a GUI application for this purpose. In Chapter 4, you will learn how to detect furniture using the Furniture Detector dataset provided by Kaggle (https://www.kaggle.com/akkithetechie/furniture-detector) using the VGG16 model. You will build a GUI application for this purpose. In Chapter 5, you will learn how to classify fashion using the Fashion MNIST dataset provided by Kaggle (https://www.kaggle.com/zalando-research/fashionmnist/code) using a CNN model. You will build a GUI application for this purpose.
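As a hedged sketch of the kind of CNN classifier built throughout these image-classification chapters (an assumption, not the books' architecture; the 32x32 input and the 43-class output merely echo GTSRB):

```python
# Minimal sketch (assumption): a small Keras CNN for multi-class image classification.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # small RGB images (placeholder size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularization against overfitting
    layers.Dense(43, activation="softmax"),   # e.g. one unit per traffic-sign class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```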
WORKSHOP 3: In this workshop, you will implement deep learning for detecting vehicle license plates, recognizing sign language, and detecting surface cracks using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and other libraries. In Chapter 1, you will learn how to detect vehicle license plates using the Car License Plate Detection dataset provided by Kaggle (https://www.kaggle.com/andrewmvd/car-plate-detection/download). In Chapter 2, you will learn how to perform sign language recognition using the Sign Language Digits Dataset provided by Kaggle (https://www.kaggle.com/ardamavi/sign-language-digits-dataset/download). In Chapter 3, you will learn how to detect surface cracks using the Surface Crack Detection dataset provided by Kaggle (https://www.kaggle.com/arunrk7/surface-crack-detection/download).

WORKSHOP 4: In this workshop, you will implement deep learning-based image classification for detecting face masks, classifying weather, and recognizing flowers using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and other libraries. In Chapter 1, you will learn how to detect face masks using the Face Mask Detection Dataset provided by Kaggle (https://www.kaggle.com/omkargurav/face-mask-dataset/download). In Chapter 2, you will learn how to classify weather using the Multi-class Weather Dataset provided by Kaggle (https://www.kaggle.com/pratik2901/multiclass-weather-dataset/download).

WORKSHOP 5: In this workshop, you will implement deep learning-based image classification for classifying monkey species, recognizing rock, paper, and scissors, and classifying airplanes, cars, and ships using TensorFlow, Keras, Scikit-Learn, OpenCV, Pandas, NumPy, and other libraries. In Chapter 1, you will learn how to classify monkey species using the 10 Monkey Species dataset provided by Kaggle (https://www.kaggle.com/slothkong/10-monkey-species/download). In Chapter 2, you will learn how to recognize rock, paper, and scissors using the Rock-Paper-Scissors dataset provided by Kaggle (https://www.kaggle.com/sanikamal/rock-paper-scissors-dataset/download).
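Several of the chapters above lean on pretrained backbones (VGG16, MobileNetV2); a hedged sketch of that frozen-base transfer-learning pattern, with a placeholder two-class head (an assumption, not the books' exact code):

```python
# Minimal sketch (assumption): VGG16 as a frozen feature extractor on 224x224 images.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # freeze the pretrained convolutional layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),    # placeholder head, e.g. mask / no-mask
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```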
WORKSHOP 6: In this workshop, you will implement two data science projects using Scikit-Learn, SciPy, and other libraries with a Python GUI. In Chapter 1, you will learn how to predict traffic (the number of vehicles) at four different junctions using the Traffic Prediction Dataset provided by Kaggle (https://www.kaggle.com/fedesoriano/traffic-prediction-dataset/download). This dataset contains 48.1k (48,120) hourly observations of the number of vehicles at four different junctions, with four columns: 1) DateTime; 2) Junction; 3) Vehicles; and 4) ID. In Chapter 2, you will learn how to use Scikit-Learn, NumPy, Pandas, and other libraries to analyze and predict heart attacks using the Heart Attack Analysis & Prediction Dataset provided by Kaggle (https://www.kaggle.com/rashikrahmanpritom/heart-attack-analysis-prediction-dataset/download).

WORKSHOP 7: In this workshop, you will implement two data science projects using Scikit-Learn, SciPy, and other libraries with a Python GUI. In Project 1, you will learn how to predict early-stage diabetes using the Early Stage Diabetes Risk Prediction Dataset provided by Kaggle (https://www.kaggle.com/ishandutta/early-stage-diabetes-risk-prediction-dataset/download). This dataset contains the sign and symptom data of newly diabetic or would-be diabetic patients. It has been collected using direct questionnaires from the patients of Sylhet Diabetes Hospital in Sylhet, Bangladesh, and approved by a doctor. You will develop a GUI using PyQt5 to plot the distribution of features, feature importance, cross-validation scores, and predicted values versus true values. The machine learning models used in this project are AdaBoost, Random Forest, Gradient Boosting, Logistic Regression, and Support Vector Machine. In Project 2, you will learn how to analyze and predict breast cancer using the Breast Cancer Prediction Dataset provided by Kaggle (https://www.kaggle.com/merishnasuwal/breast-cancer-prediction-dataset/download). Worldwide, breast cancer is the most common type of cancer in women and the second highest in terms of mortality rates. Diagnosis of breast cancer is performed when an abnormal lump is found (from self-examination or x-ray) or a tiny speck of calcium is seen (on an x-ray). After a suspicious lump is found, the doctor will conduct a diagnosis to determine whether it is cancerous and, if so, whether it has spread to other parts of the body. This breast cancer dataset was obtained from the University of Wisconsin Hospitals, Madison, from Dr. William H. Wolberg. You will develop a GUI using PyQt5 to plot the distribution of features, pairwise relationships, test scores, predicted values versus true values, the confusion matrix, and the decision boundary. The machine learning models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, and Support Vector Machine.

WORKSHOP 8: In this workshop, you will learn how to use Scikit-Learn, TensorFlow, Keras, NumPy, Pandas, Seaborn, and other libraries to implement brain tumor classification and detection with machine learning using the Brain Tumor dataset provided by Kaggle. This dataset contains five first-order features: Mean (the contribution of individual pixel intensity for the entire image), Variance (how each pixel varies from the neighboring pixels), Standard Deviation (the deviation of the measured values from their mean), Skewness (a measure of symmetry), and Kurtosis (describes the peak of, e.g., a frequency distribution). It also contains eight second-order features: Contrast, Energy, ASM (Angular Second Moment), Entropy, Homogeneity, Dissimilarity, Correlation, and Coarseness. The machine learning models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, and Support Vector Machine. The deep learning models used in this project are MobileNet and ResNet50. In this project, you will develop a GUI using PyQt5 to plot the decision boundary, ROC, distribution of features, feature importance, cross-validation scores, predicted values versus true values, the confusion matrix, training loss, and training accuracy.

WORKSHOP 9: In this workshop, you will learn how to use Scikit-Learn, Keras, TensorFlow, NumPy, Pandas, Seaborn, and other libraries to perform COVID-19 epitope prediction using the COVID-19/SARS B-cell Epitope Prediction dataset provided on Kaggle. All three datasets consist of protein and peptide information: parent_protein_id: parent protein ID; protein_seq: parent protein sequence; start_position: start position of peptide; end_position: end position of peptide; peptide_seq: peptide sequence; chou_fasman: peptide feature; emini: peptide feature, relative surface accessibility; kolaskar_tongaonkar: peptide feature, antigenicity; parker: peptide feature, hydrophobicity; isoelectric_point: protein feature; aromacity: protein feature; hydrophobicity: protein feature; stability: protein feature; and target: antibody valence (target value). The machine learning models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, Gradient Boosting, XGB classifier, and MLP classifier. Then, you will learn how to use sequential CNN and VGG16 models to detect and predict COVID-19 from X-rays using the COVID-19 Xray Dataset (Train & Test Sets) provided on Kaggle. The folder itself consists of two subfolders: test and train. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC, distribution of features, feature importance, cross-validation scores, predicted values versus true values, the confusion matrix, training loss, and training accuracy.

WORKSHOP 10: In this workshop, you will learn how to use Scikit-Learn, Keras, TensorFlow, NumPy, Pandas, Seaborn, and other libraries to analyze and predict stroke using the dataset provided on Kaggle. The dataset consists of the following attributes: id: unique identifier; gender: "Male", "Female" or "Other"; age: age of the patient; hypertension: 0 if the patient doesn't have hypertension, 1 if the patient has hypertension; heart_disease: 0 if the patient doesn't have any heart diseases, 1 if the patient has a heart disease; ever_married: "No" or "Yes"; work_type: "children", "Govt_job", "Never_worked", "Private" or "Self-employed"; Residence_type: "Rural" or "Urban"; avg_glucose_level: average glucose level in blood; bmi: body mass index; smoking_status: "formerly smoked", "never smoked", "smokes" or "Unknown"; and stroke: 1 if the patient had a stroke or 0 if not. The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and CNN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC, distribution of features, feature importance, cross-validation scores, predicted values versus true values, the confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
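The recurring "develop a GUI using PyQt5 to plot ..." step typically embeds a matplotlib figure in a Qt window; here is a minimal sketch with placeholder data (an assumption, not the books' code):

```python
# Minimal sketch (assumption): embed a matplotlib bar chart in a PyQt5 main window.
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure

class PlotWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        fig = Figure(figsize=(5, 4))
        ax = fig.add_subplot(111)
        ax.bar(["TP", "TN", "FP", "FN"], [50, 40, 5, 5])   # placeholder counts
        ax.set_title("Confusion matrix counts (placeholder data)")
        self.setCentralWidget(FigureCanvas(fig))           # the canvas is a QWidget

app = QApplication(sys.argv)
win = PlotWindow()
win.show()
sys.exit(app.exec_())
```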
WORKSHOP 11: In this workshop, you will learn how to use Scikit-Learn, Keras, TensorFlow, NumPy, Pandas, Seaborn, and other libraries to classify and predict Hepatitis C using a dataset provided by the UCI Machine Learning Repository. All attributes in the dataset except Category and Sex are numerical. Attributes 1 to 4 refer to the data of the patient: X (Patient ID/No.), Category (diagnosis) (values: '0=Blood Donor', '0s=suspect Blood Donor', '1=Hepatitis', '2=Fibrosis', '3=Cirrhosis'), Age (in years), Sex (f, m), ALB, ALP, ALT, AST, BIL, CHE, CHOL, CREA, GGT, and PROT. The target attribute for classification is Category (2): blood donors vs. Hepatitis C patients (including its progress: 'just' Hepatitis C, Fibrosis, Cirrhosis). The models used in this project are K-Nearest Neighbor, Random Forest, Naive Bayes, Logistic Regression, Decision Tree, Support Vector Machine, AdaBoost, LGBM classifier, Gradient Boosting, XGB classifier, MLP classifier, and ANN 1D. Finally, you will develop a GUI using PyQt5 to plot the decision boundary, ROC, distribution of features, feature importance, cross-validation scores, predicted values versus true values, the confusion matrix, learning curve, performance of the model, scalability of the model, training loss, and training accuracy.
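As a final hedged sketch, the learning-curve, model-performance, and scalability plots mentioned in these workshops can be driven by scikit-learn's learning_curve; the model and dataset below are stand-ins, not the books' choices:

```python
# Minimal sketch (assumption): compute learning-curve data, including fit times,
# which back the "learning curve / performance / scalability" plots.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_breast_cancer(return_X_y=True)          # stand-in dataset
sizes, train_scores, val_scores, fit_times, _ = learning_curve(
    LogisticRegression(max_iter=5000), X, y,
    cv=5, train_sizes=np.linspace(0.1, 1.0, 5), return_times=True,
)
print("train sizes :", sizes)
print("val accuracy:", val_scores.mean(axis=1))     # performance of the model
print("fit times   :", fit_times.mean(axis=1))      # scalability of the model
```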