Jiawen Liu
Work Experience
Huawei Technologies Co., Ltd.
- Role: Lead Engineer / Architect (Level 18)
- Duration: June 2023 – present (3 years)
- Project 1 – DataAgent (Trustworthy AI Data Analysis Platform)
- Responsibilities: Overall system design, proof-of-concept (POC) verification of key technical breakthroughs, project management, and team leadership.
- Key contributions: Built a code-based modeling platform that mitigates LLM hallucinations via formalizable data pipelines; enabled reliable AI‑driven data analytics for enterprise scenarios.
- Outcomes: 18 filed/granted patents (2023–2026) covering hallucination suppression, adaptive noise calibration, and verifiable data analysis.
- Project 2 – OpenFuyao (Ascend NPU Ecosystem)
- Role: Co-lead of inference acceleration toolkit for LLMs on NPU.
- Contributions: Designed and optimized NPU‑aware operator libraries and memory scheduling; integrated the toolkit with mainstream LLM frameworks; achieved significant latency reduction for large‑scale deployment.
- Impact: Core contributor to Huawei’s Ascend computing ecosystem, enabling efficient LLM inference on domestic AI accelerators.
Academic Activity
Publications
Patents
- 18 filed/published patents (as of 2026) in intelligent data analysis, LLM hallucination mitigation, code-based modeling, and NPU acceleration.
Drafts
Talks/Posters
- Short talk -- Type System in Adaptive Data Analysis -- EGLPLS 2019 (at Cornell University, Ithaca, NY, USA) [PDF]
- Poster session -- Tailoring Differentially Private Bayesian Inference to Distance Between Distributions -- TPDP at CCS 2018 (in Toronto, Canada) [PDF]
Events
- POPL Jan. 15-21, 2023 (in Boston, Massachusetts, United States)
- New England Programming Languages and Systems Symposium (NEPLS) Sep. 29, 2022 (at Harvard University, Cambridge, Massachusetts, United States)
- POPL Jan. 17-23, 2022 (in Philadelphia, Pennsylvania, United States)
- PLMW@POPL Jan. 18, 2022 (in Philadelphia, Pennsylvania, United States)
- POPL Jan. 16-21, 2021 (online)
- New England Systems Verification Day Oct. 18, 2019 (at MIT, Cambridge, MA, USA)
- POPL Jan. 13-19, 2019 (in Cascais, Portugal)
- PLMW@POPL Jan. 14, 2019 (in Cascais, Portugal)
- OPLSS -- Foundations of Probabilistic Programming and Security -- Jun. 17-29, 2019 (at the University of Oregon, Eugene, OR, USA)
Teaching
- Teaching Assistant for CS 320, Concepts of Programming Languages, 2019 Fall - Boston University course page
- Teaching Assistant for CSE 305, Introduction to Programming Languages, 2019 Spring - University at Buffalo course page
- Teaching Assistant for CSE 305, Introduction to Programming Languages, 2018 Fall - University at Buffalo course page
- Teaching Assistant for CSE 305, Introduction to Programming Languages, 2018 Spring - University at Buffalo course page
- Teaching Assistant for CSE 542, Software Engineering Concepts, 2017 Fall - University at Buffalo course page
Curriculum Vitae
- June 2023 – present, Huawei Technologies Co., Ltd. (Level 18) – Lead Engineer/Architect (DataAgent & OpenFuyao).
- September 2019 – May 2023, Ph.D. in Computer Science, Boston University. Advisor: Marco Gaboardi. Thesis on program analysis for adaptive data analysis (PLDI 2023).
- September 2017 – May 2019, Ph.D. student in Computer Science and Engineering, University at Buffalo, SUNY. Advisor: Marco Gaboardi.
- September 2016 – June 2017, Intern, Institute of Information Engineering, Chinese Academy of Sciences.
- September 2013 – June 2017, B.A. in Information Science, Central University of Finance and Economics, Beijing.
- Born September 7, 1997, in China.
Links
Contacts
Research Interest
- Differential Privacy
- Programming Languages and Type Systems
- Formal Verification
- Trustworthy AI & LLM Hallucination Mitigation
I am now working on two topics:
- A programming language for adaptive data analysis based on probabilistic programs, using type information to guarantee confidence intervals on outputs. [GitHub]
- An automatic formal verification tool for differentially private algorithms implemented in floating-point computation. [GitHub]
Previous topics:
- An improved mechanism for Bayesian inference that calibrates noise to the sensitivity of a metric over distributions. [GitHub]