Janki Bhimani, PhD
Janki Bhimani (email@example.com) is a PhD candidate in the Department of Electrical and Computer Engineering at Northeastern University, Boston. Her current research focuses on system performance engineering and spans three main streams: scheduling, capacity planning, and resource management for big-data workloads in the cloud; performance modeling for data processing; and enhancing the software stack of modern storage devices. She received her M.S. in Computer Engineering from Northeastern University in 2014, and her B.Tech. in Electrical and Electronics Engineering from Gitam University, India, in 2013. She is passionate about exploring emerging technologies to yield better system performance.
- Database Workload Characterization for Containerized Application Traffic
- I/O Modeling, Multi-stream NVMe SSDs and stream assignment algorithms
- Performance Prediction, Modeling, Simulation and Evaluation of Distributed Platforms
- Application Bottleneck Domain Analysis (Storage, Communication, Calculation) and Acceleration
- Parallel Computing, MPI Scheduling and Heterogeneous Computing
- Capacity Planning and Resource Management
Efficient System for Identifying Data Temperature for Stream Identification in Multi-Stream SSD (Ongoing)
Innovate a new data structure based on Bloom filters for efficient data temperature categorization, which can be used to identify stream IDs while writing data into multi-stream SSDs. It is a memory-efficient technique designed with the limited resources available within an SSD device in mind.
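To make the idea concrete, here is a minimal sketch of how a counting-Bloom-style structure could estimate per-LBA write frequency and map it to a stream ID. All names, sizes, and thresholds are illustrative assumptions, not the actual design under development:

```python
import hashlib

class TemperatureFilter:
    """Counting-Bloom-style sketch for data temperature (hypothetical
    design, for illustration only): k hash functions index small
    counters, and the minimum counter approximates how often an LBA
    has been written, using far less memory than a per-LBA table."""

    def __init__(self, size=1024, num_hashes=3, hot_threshold=4):
        self.size = size
        self.num_hashes = num_hashes
        self.hot_threshold = hot_threshold
        self.counters = [0] * size

    def _indexes(self, lba):
        # Derive k deterministic counter positions for this LBA.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{lba}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def record_write(self, lba):
        for idx in self._indexes(lba):
            self.counters[idx] += 1

    def estimate(self, lba):
        # The minimum counter upper-bounds the true write count.
        return min(self.counters[idx] for idx in self._indexes(lba))

    def stream_id(self, lba):
        # Route frequently rewritten (hot) data to a separate stream.
        return 1 if self.estimate(lba) >= self.hot_threshold else 0
```

The appeal for an in-device setting is that the memory footprint is fixed by the counter array, independent of how many distinct LBAs are written.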
PatIO: Pattern I/O Generator (Under Review)
PatIO is an orthogonal approach to advancing a naive synthetic I/O engine so that it produces I/Os representative of real-world workloads. Our methodology is based on a three-step process: dissect, construct, and integrate. We first study the I/O activities of real application workloads from a storage point of view, dissecting the overall I/O activity of each workload into distinct I/O patterns. Then, we construct a pattern warehouse as the collection of all patterns. Each pattern is framed by a unique combination of I/O jobs that can be generated by an I/O engine (e.g., FIO, a popular I/O engine) with different input features. Finally, different combinations of these synthetically generated I/O patterns can reproduce the characteristics of various real workloads. We would like to emphasize that our method is lightweight: it neither demands a large amount of storage to hold traces or chunk-characteristic information, nor requires the tedious and time-consuming installation, configuration, and load phases of a database before running. Furthermore, PatIO is scalable, generating I/O workloads over different storage sizes.
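The construct and integrate steps can be sketched as follows. This is only an illustrative mock-up: the pattern names, FIO-style parameters, and the way proportions are expressed are assumptions, not PatIO's actual warehouse or output format:

```python
# Construct: a pattern warehouse, where each pattern is framed by a
# unique combination of FIO-style I/O features (illustrative values).
PATTERN_WAREHOUSE = {
    "seq_write_large": {"rw": "write", "bs": "128k", "iodepth": 32},
    "rand_read_small": {"rw": "randread", "bs": "4k", "iodepth": 1},
    "rand_write_small": {"rw": "randwrite", "bs": "4k", "iodepth": 8},
}

def integrate(workload_mix):
    """Integrate: emit an FIO-style job file mixing the selected
    patterns. Each pattern's share of the real workload's I/O activity
    is recorded as a comment; how proportions are enforced at run time
    is outside this sketch."""
    sections = []
    for name, share in workload_mix.items():
        params = PATTERN_WAREHOUSE[name]
        lines = [f"; pattern share: {share:.0%}", f"[{name}]"]
        lines += [f"{k}={v}" for k, v in params.items()]
        sections.append("\n".join(lines))
    return "\n\n".join(sections)
```

For example, `integrate({"seq_write_large": 0.3, "rand_read_small": 0.7})` would emit two job sections that together approximate a workload dominated by small random reads with a background sequential-write component.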
Comprehensive Design Guidelines and Scheduler for Mapping Workloads to Modern Storage Platform
Design and develop a Docker Workload Controller that decides the optimal initialization and operation of containerized Docker workloads running on multiple NVMe SSDs. The controller determines the optimal batches of simultaneously operating containers to minimize total execution time and maximize resource utilization, while also striving to balance throughput among all simultaneously running applications. We develop this new Docker controller by solving an optimization problem using five different optimization solvers.
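The flavor of the batching decision can be shown with a much simpler stand-in. The sketch below uses first-fit-decreasing bin packing to group containers (by a normalized resource demand) into batches that fit device capacity; the real controller instead solves an optimization problem with dedicated solvers, and the demand values here are made up:

```python
def plan_batches(demands, capacity):
    """Greedy first-fit-decreasing sketch of container batching
    (illustrative only): pack containers into the fewest batches whose
    total demand fits the device capacity, so fewer batches means a
    shorter total execution time at high utilization."""
    batches = []  # each entry: [remaining_capacity, [container names]]
    for name, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for batch in batches:
            if batch[0] >= demand:          # fits in an existing batch
                batch[0] -= demand
                batch[1].append(name)
                break
        else:                               # open a new batch
            batches.append([capacity - demand, [name]])
    return [members for _, members in batches]
```

Packing the largest consumers first tends to leave small containers to fill the gaps, which is the same intuition the optimization formulation captures more rigorously.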
FIOS: Feature-based I/O Stream-ID assignment for Multi-Stream SSDs (Ongoing)
Leverage multi-stream SSD firmware by inventing a smart stream-ID assignment algorithm for multi-stream SSDs to provide better endurance of flash devices and enhance the lifetime of SSDs. Develop an algorithm that reduces the write amplification factor (WAF) and adapts easily to any single application as well as to multiple applications running simultaneously.
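One way feature-based assignment could work is to group writes with similar update intervals into the same stream, so blocks whose pages expire together can be erased together with little copying. The sketch below is a hedged illustration under that assumption; FIOS's actual features and algorithm may differ:

```python
class StreamAssigner:
    """Illustrative feature-based stream-ID assignment (not the actual
    FIOS algorithm): the feature is the logical interval since an LBA's
    last write, and LBAs with similar intervals share a stream, since
    data with similar lifetimes tends to be invalidated together."""

    def __init__(self, num_streams=4):
        self.num_streams = num_streams
        self.last_write = {}   # lba -> logical clock at last write
        self.clock = 0

    def assign(self, lba):
        self.clock += 1
        interval = self.clock - self.last_write.get(lba, 0)
        self.last_write[lba] = self.clock
        # Shorter interval => hotter data => lower-numbered stream;
        # bit_length buckets intervals on a log scale.
        return min(interval.bit_length(), self.num_streams) - 1
```

Because the state is a single per-LBA timestamp and the bucketing is application-agnostic, the same policy applies unchanged whether one application or several are writing concurrently.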
I/O Intensive Containerized Applications on Flash
Performance characterization to enable the best performance and fairness for I/O-intensive Dockerized applications running on NVMe SSDs, by implementing and exploring homogeneous and heterogeneous database-workload container setups.