I am Buddhika Patalee, a data enthusiast and quantitative researcher with expertise in machine learning, predictive analytics, and econometric modeling. With a strong foundation in data science, I specialize in leveraging advanced statistical techniques and machine learning models to extract insights from complex datasets.
Currently, I am a research scholar at the University of Kentucky, where I develop predictive models, machine learning algorithms, and econometric frameworks to analyze large-scale datasets. My work includes data-driven decision-making, policy evaluation, and business intelligence solutions.
My goal is to bridge academic research and real-world application, helping stakeholders make data-driven decisions using advanced analytics.
This website is my space to share insights, tools, and reflections on the evolving world of data analytics. From tensor decomposition and econometrics to machine learning pipelines, I explore topics that lie at the intersection of theory and real-world application. Whether you’re a fellow data scientist or a curious reader, I hope you find value in the methods and ideas shared here.
AI & Machine Learning Applications
My research integrates applied AI and machine learning techniques to generate consumer insights, model behavior, and support data-driven decision-making. I focus on interpretable, actionable models rooted in economic and behavioral science contexts.
Key Areas of Experience:
Neural Networks & Predictive Modeling: Applied shallow neural networks for time series forecasting and behavioral prediction using tools such as scikit-learn and R (a minimal sketch follows this list).
Consumer Insight Modeling: Used machine learning to model consumer preferences and purchase behavior from panel data.
Survey-Based ML Pipelines: Integrated ML models with survey data to assess decision-making patterns and risk perceptions.
Feature Engineering & Data Cleaning: Built high-quality datasets for supervised learning tasks in Python and R.
Model Evaluation: Applied classification metrics (accuracy, recall, F1), cross-validation, and interpretability tools (feature importance, partial dependence); see the evaluation sketch below.
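As a small illustration of the forecasting work mentioned above, here is a minimal sketch of a shallow neural network for one-step-ahead time series forecasting in scikit-learn. The synthetic series, lag length, and network size are illustrative assumptions, not details from a specific project.

```python
# A minimal, illustrative sketch: shallow neural network for
# one-step-ahead time series forecasting (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Noisy sine wave standing in for a real behavioral or economic series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

# Recast the series as supervised learning with a sliding window of lags.
lags = 10
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

# Chronological split: a forecaster should never be tested on shuffled data.
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("held-out R^2:", round(model.score(X[split:], y[split:]), 3))
```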
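The evaluation step deserves its own sketch: cross-validated accuracy, recall, and F1 in scikit-learn, plus feature importances as a simple interpretability check (partial dependence is available via sklearn.inspection). The dataset and model choice here are illustrative assumptions.

```python
# A minimal, illustrative sketch of the evaluation workflow:
# cross-validated metrics plus a simple interpretability check.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for a cleaned, feature-engineered dataset.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validation scored on accuracy, recall, and F1.
scores = cross_validate(model, X, y, cv=5,
                        scoring=["accuracy", "recall", "f1"])
for metric in ("test_accuracy", "test_recall", "test_f1"):
    print(f"{metric}: {scores[metric].mean():.3f}")

# Refit on all data to inspect which features drive predictions.
model.fit(X, y)
for i, imp in enumerate(model.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")
```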
Tools & Libraries:
Python (scikit-learn, pandas, matplotlib, seaborn), R, SQL, Stata
Understanding the Hidden Structure of High-Dimensional Data with Tensors
As data grows in complexity (images, videos, EEG signals, multi-sensor recordings), traditional tools like matrices fall short. Tensor decomposition offers a powerful way to uncover low-dimensional structure hidden within high-dimensional datasets. In this post, I introduce two popular tensor decomposition methods, Tucker and CP decomposition, along with Python examples. Whether you’re working with biomedical images or recommender systems, these techniques help compress data, extract features, and improve learning efficiency.
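As a quick taste of what the post covers, here is a minimal sketch using the tensorly library (my choice here for illustration; the post walks through fuller examples), comparing CP and Tucker reconstructions of a random 3-way tensor.

```python
# A minimal, illustrative sketch of CP and Tucker decomposition
# using the tensorly library (pip install tensorly).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# Random 3-way tensor standing in for, e.g., subjects x sensors x time.
X = tl.tensor(np.random.default_rng(0).random((20, 10, 50)))

# CP decomposition: approximate X as a sum of rank-one components.
cp = parafac(X, rank=5)
X_cp = tl.cp_to_tensor(cp)

# Tucker decomposition: a small core tensor plus one factor matrix per mode.
core, factors = tucker(X, rank=[5, 5, 10])
X_tucker = tl.tucker_to_tensor((core, factors))

# Relative reconstruction error shows how much structure each model captures.
print("CP error:    ", float(tl.norm(X - X_cp) / tl.norm(X)))
print("Tucker error:", float(tl.norm(X - X_tucker) / tl.norm(X)))
```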
Explore how you can use tensor tools to make sense of complex data landscapes.