Results
Evaluation Results:
- 32 Artifacts Available
- 25 Artifacts Functional
- 12 Results Reproduced
| Title | Available | Functional | Reproduced | Available at |
|---|---|---|---|---|
| Aceso: Efficient Parallel DNN Training through Iterative Bottleneck Alleviation | ✓ | ✓ | | Artifact |
| Adaptable Runtime Monitoring for Intermittent Systems | ✓ | | | Artifact |
| Atlas: Hybrid Cloud Migration Advisor for Interactive Microservices | ✓ | | | Artifact |
| CDMPP: A Device-Model Agnostic Framework for Latency Prediction of Tensor Programs | ✓ | | | Artifact |
| Characterization and Reclamation of Frozen Garbage in Managed FaaS Workloads | ✓ | ✓ | | Artifact |
| Concealing Compression-accelerated I/O for HPC Applications through In Situ Task Scheduling | ✓ | ✓ | ✓ | Artifact |
| CSAL: the Next-Gen Local Disks for the Cloud | ✓ | ✓ | | Artifact |
| Dordis: Efficient Federated Learning with Dropout-Resilient Differential Privacy | ✓ | | | Artifact |
| Draconis: Network-Accelerated Scheduling for Micro-Scale Workloads | ✓ | | | Artifact |
| DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines | ✓ | ✓ | ✓ | Artifact |
| Effective Bug Detection with Unused Definitions | ✓ | ✓ | | Artifact |
| Efficient Auditing of Event-driven Web Applications | ✓ | ✓ | ✓ | Artifact |
| Enoki: High Velocity Linux Kernel Scheduler Development | ✓ | ✓ | | Artifact |
| FLOAT: Federated Learning Optimizations with Automated Tuning | ✓ | ✓ | | Artifact |
| FLOWS: Balanced MRC Profiling for Heterogeneous Object-Size Cache | ✓ | ✓ | | Artifact |
| GMorph: Accelerating Multi-DNN Inference via Model Fusion | ✓ | ✓ | | Artifact |
| HAP: SPMD DNN Training on Heterogeneous GPU Clusters with Automated Program Synthesis | ✓ | ✓ | | Artifact |
| HD-IOV: SW-HW Co-designed I/O Virtualization with Scalability and Flexibility for Hyper-Density Cloud | ✓ | | | Artifact |
| Improving GPU Energy Efficiency through an Application-transparent Frequency Scaling Policy with Performance Assurance | ✓ | ✓ | | Artifact |
| Minuet: Accelerating 3D Sparse Convolutions on GPUs | ✓ | ✓ | ✓ | Artifact |
| Model Selection for Latency-Critical Inference Serving | ✓ | ✓ | | Artifact |
| On the Limitations of Carbon-Aware Temporal and Spatial Workload Shifting in the Cloud | ✓ | ✓ | ✓ | Artifact |
| Orion: Interference-aware, Fine-grained GPU Sharing for ML Applications | ✓ | ✓ | ✓ | Artifact |
| Pronghorn: Effective Checkpoint Orchestration for Serverless Hot-Starts | ✓ | ✓ | | Artifact |
| Puddles: Application-Independent Recovery and Location-Independent Data for Persistent Memory | ✓ | ✓ | ✓ | Artifact |
| Serialization/Deserialization-free State Transfer in Serverless Workflows with RDMA-based Remote Memory Map | ✓ | | | Artifact |
| SplitFT: Fault Tolerance for Disaggregated Datacenters via Remote Memory Logging | ✓ | ✓ | ✓ | Artifact |
| TraceUpscaler: Upscaling Traces to Evaluate Systems at High Load | ✓ | ✓ | | Artifact |
| Transparent Multicore Scaling of Single-Threaded Network Functions | ✓ | ✓ | ✓ | Artifact |
| TTLs Matter: Efficient Cache Sizing with TTL-Aware Miss Ratio Curves and Working Set Sizes | ✓ | ✓ | ✓ | Artifact |
| Unison: A Parallel-Efficient and User-Transparent Network Simulation Kernel | ✓ | ✓ | ✓ | Artifact |
| WiseGraph: Optimizing GNN with Joint Workload Partition of Graph and Operations | ✓ | ✓ | ✓ | Artifact |