59 results for “topic:aws-quicksight”
Guidance for Clickstream Analytics on AWS source code
Bring your own data Labs: Build a serverless data pipeline based on your own data
Voice of the Customer (VoC) solution to enhance customer experience with a serverless architecture, using Amazon Kinesis, Amazon Athena, Amazon QuickSight, Amazon Comprehend, and ChatGPT LLMs for sentiment analysis.
This repository includes some AWS Cloud Quest labs; it does not include the Cloud Practitioner labs.
Understanding DevOps concepts, with hands-on practice and study using AWS developer tools
Build a Visualization and Monitoring Dashboard for IoT Data with Amazon Kinesis Analytics and Amazon QuickSight
Build machine learning-powered business intelligence analyses using Amazon QuickSight
A simple, practical, and affordable system for measuring head trauma in sports environments where trained medical personnel are absent, built using Amazon Kinesis Data Streams, Kinesis Data Analytics, Kinesis Data Firehose, and AWS Lambda
AWS Programming and Tools meetup workshop
You run a script to mimic multiple sensors publishing messages on an IoT MQTT topic, with one message published every second. The events are sent to AWS IoT, where a configured IoT rule captures all messages and forwards them to Kinesis Data Firehose. Firehose writes the messages in batches to objects stored in S3. Finally, you set up a table in Athena over the S3 data and use QuickSight to analyze the IoT data.
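The sensor-simulation step described above could be sketched as follows. This is a minimal illustration, not the repo's actual script: the topic name, payload fields, and loop parameters are all hypothetical, and the `boto3` publish call (AWS IoT Data Plane) is only executed when `dry_run` is disabled.

```python
import json
import random
import time

# Hypothetical MQTT topic name; the repo's IoT rule would match its own topic.
TOPIC = "sensors/telemetry"

def make_sensor_message(sensor_id: int) -> dict:
    """Build one fake sensor reading, mimicking a published MQTT payload."""
    return {
        "sensor_id": sensor_id,
        "temperature": round(random.uniform(15.0, 35.0), 2),
        "humidity": round(random.uniform(20.0, 90.0), 2),
        "timestamp": int(time.time()),
    }

def publish_loop(n_sensors: int = 3, iterations: int = 2, dry_run: bool = True):
    """Publish one message per sensor per second; dry_run skips the AWS call."""
    client = None
    if not dry_run:
        import boto3
        client = boto3.client("iot-data")  # AWS IoT Data Plane
    sent = []
    for _ in range(iterations):
        for sensor_id in range(n_sensors):
            payload = json.dumps(make_sensor_message(sensor_id))
            if client is not None:
                # Matches the topic filter in the (assumed) IoT rule.
                client.publish(topic=TOPIC, qos=1, payload=payload)
            sent.append(payload)
        if not dry_run:
            time.sleep(1)  # one message per sensor per second
    return sent

if __name__ == "__main__":
    messages = publish_loop()
    print(f"generated {len(messages)} messages")
```

From there the pipeline is configuration rather than code: the IoT rule's SQL selects from the topic, its action targets the Firehose delivery stream, and Athena's table DDL points at the S3 prefix Firehose writes to.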
No description provided.
aws-quicksight-tool assists in the use of the AWS QuickSight CLI.
This project integrates real-time data processing and analytics using Apache NiFi, Kafka, Spark, Hive, and AWS services for comprehensive COVID-19 data insights.
Smart City Realtime Data Engineering Project
Scraped tweets using the Twitter API (for the keyword "Netflix") on an AWS EC2 instance and ingested the data into S3 via Kinesis Data Firehose. Used Spark ML on Databricks to build a pipeline for a sentiment classification model, and Athena and QuickSight to build a dashboard
Convert DMARC reports to TSV (or CSV) format for easier analysis and visualisation
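The conversion described above could be sketched with the standard library alone. This is an illustrative fragment, not the repo's implementation: the embedded sample is a minimal DMARC aggregate-report excerpt, and the selected fields are a small subset of what real reports contain.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Minimal DMARC aggregate-report fragment used as sample input; real reports
# include report metadata and published-policy sections as well.
SAMPLE = """<feedback>
  <record>
    <row>
      <source_ip>203.0.113.7</source_ip>
      <count>12</count>
      <policy_evaluated>
        <disposition>none</disposition><dkim>pass</dkim><spf>fail</spf>
      </policy_evaluated>
    </row>
    <identifiers><header_from>example.com</header_from></identifiers>
  </record>
</feedback>"""

def dmarc_to_tsv(xml_text: str, delimiter: str = "\t") -> str:
    """Flatten each <record> of a DMARC aggregate report into one delimited row."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.writer(out, delimiter=delimiter)
    writer.writerow(["source_ip", "count", "disposition", "dkim", "spf", "header_from"])
    for rec in root.iter("record"):
        writer.writerow([
            rec.findtext("row/source_ip"),
            rec.findtext("row/count"),
            rec.findtext("row/policy_evaluated/disposition"),
            rec.findtext("row/policy_evaluated/dkim"),
            rec.findtext("row/policy_evaluated/spf"),
            rec.findtext("identifiers/header_from"),
        ])
    return out.getvalue()

if __name__ == "__main__":
    print(dmarc_to_tsv(SAMPLE))
```

Passing `delimiter=","` yields CSV instead of TSV, matching the "TSV (or CSV)" option in the description.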
This project repo 📺 offers a pipeline to manage, process, and analyze YouTube video data using AWS services, covering both structured statistics and trending key metrics.
A data pipeline to ingest, process, and store storm events datasets so they can be accessed through different means.
A demand forecasting pipeline deployed on Azure and AWS
A testbed showing how to embed QuickSight dashboards into a web app
US Insurance cost predicting linear regression model. Mainly used to learn about Machine Learning tools in Amazon Web Services (AWS)
Data lake demo using change data capture (CDC) on AWS
This project demonstrates a complete data pipeline for extracting, transforming, and loading (ETL) Reddit data into an Amazon Redshift data warehouse. The pipeline uses various AWS services and tools including Apache Airflow, PostgreSQL, AWS S3, AWS Glue, AWS Athena, and Amazon Redshift. The project is orchestrated using Docker and Apache Airflow
This project targets legacy applications that process data using positional (fixed-width) files. The objective is to read these positional files when they arrive in AWS S3, send the data to a data warehouse such as Amazon Redshift, and finally explore the results with a business intelligence tool such as Amazon QuickSight.
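The core parsing step in a pipeline like this is slicing each fixed-width record into named fields. A minimal sketch, with an entirely hypothetical field layout (a real legacy layout would come from the application's record spec):

```python
# Hypothetical layout: (field name, start, end) as 0-based string slices.
LAYOUT = [
    ("customer_id", 0, 6),
    ("name",        6, 21),
    ("amount",      21, 30),
]

def parse_positional_line(line: str) -> dict:
    """Slice one fixed-width record into named, whitespace-trimmed fields."""
    return {name: line[start:end].strip() for name, start, end in LAYOUT}

def parse_positional_file(text: str) -> list:
    """Parse a whole positional file, skipping blank lines."""
    return [parse_positional_line(l) for l in text.splitlines() if l.strip()]

if __name__ == "__main__":
    sample = "000042Jane Doe       000123.45\n"
    print(parse_positional_file(sample))
```

In an S3-triggered Lambda, this function would run over each arriving object before the rows are staged for a Redshift `COPY`.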
A data engineering portfolio project using AWS cloud services to analyze correlations between Malaysian retail performance and fuel prices. Features Terraform IaC, ETL/ELT with AWS S3, Glue, SQL analytics via Athena coupled with data transformation via dbt, and workflow orchestration with Kestra.
RiftGuru uses hybrid rule-based and ML-augmented detection to identify clutch moments in League of Legends matches. Built for the AWS AI Hackathon 2025 | Processes Riot API data through a serverless pipeline to generate AI-powered narratives of your best plays.
This project is a serverless, automated CSV data pipeline provisioned by Terraform. The pipeline handles the flow from raw data ingestion to final visualization.
Unveiling job market trends with Scrapy and AWS
Put-away is one of the most crucial processes in the supply chain: if goods are misplaced, all downstream processes can be delayed. That's why we chose to improve this process with a multiclass classification machine learning model and dashboarding with AWS.