<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>MODE Collaboration</title><link>https://mode-demo.github.io/</link><atom:link href="https://mode-demo.github.io/index.xml" rel="self" type="application/rss+xml"/><description>MODE Collaboration</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Thu, 26 Feb 2026 00:00:00 +0000</lastBuildDate><image><url>https://mode-demo.github.io/media/icon_hu_ebbff252c19052d0.png</url><title>MODE Collaboration</title><link>https://mode-demo.github.io/</link></image><item><title>Sixth MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/sixth_workshop/</link><pubDate>Tue, 01 Sep 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/sixth_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
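For example, a talk's front matter might point at slides like this (values are illustrative):

```yaml
# front matter of the talk file (hypothetical values)
slides: example-deck             # a deck created with the built-in Slides feature
url_slides: uploads/my-deck.pdf  # or an existing PDF uploaded to static/uploads/
```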
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>Sixth MODE Workshop Announced for September 2026</title><link>https://mode-demo.github.io/post/sixth-mode-workshop-2026/</link><pubDate>Thu, 07 May 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/sixth-mode-workshop-2026/</guid><description>&lt;p&gt;The Sixth MODE Workshop on Differentiable Programming for experiment design will take place at OAC (Kolymbari, Crete) on &lt;strong&gt;Sept 1–7, 2026&lt;/strong&gt;. Mark the date! Registration and abstract submission are open on the &lt;a href="https://indico.cern.ch/event/1655754/" target="_blank" rel="noopener"&gt;Indico page&lt;/a&gt;
.&lt;/p&gt;</description></item><item><title>Design of an Imaging Air Cherenkov Telescope array layout with differential programming</title><link>https://mode-demo.github.io/publication/alispach-2026-qn/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/alispach-2026-qn/</guid><description/></item><item><title>Differentiable Surrogate for Detector Simulation and Design with Diffusion Models</title><link>https://mode-demo.github.io/publication/nguyen-2026-wsv/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/nguyen-2026-wsv/</guid><description/></item><item><title>Differentiating a HEP Analysis Pipeline within the Scikit-HEP Software Ecosystem</title><link>https://mode-demo.github.io/publication/aly-2026-i-n/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/aly-2026-i-n/</guid><description/></item><item><title>Imaging Techniques in Muon Scattering Tomography</title><link>https://mode-demo.github.io/publication/borozdin-2026-rp/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/borozdin-2026-rp/</guid><description/></item><item><title>Partial Observability and Domain Randomization in RL-Based Strategy for Optical Cavity Locking Optimization</title><link>https://mode-demo.github.io/publication/svizzeretto-2026-bx/</link><pubDate>Thu, 01 Jan 2026 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/svizzeretto-2026-bx/</guid><description/></item><item><title>Several New Members Joined the MODE Collaboration</title><link>https://mode-demo.github.io/post/new-members-dec-2025/</link><pubDate>Wed, 10 Dec 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/new-members-dec-2025/</guid><description>&lt;p&gt;Several new members joined the MODE Collaboration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prof. &lt;strong&gt;Claudio Kopper&lt;/strong&gt; (FAU Erlangen–Nürnberg)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Oliver Janik&lt;/strong&gt; (FAU Erlangen–Nürnberg)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Francesco Ferranti&lt;/strong&gt; (LTU)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Vassil Vassilev&lt;/strong&gt; (Princeton)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Xuemei Gu&lt;/strong&gt; (Jena University)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Sarah Barnes&lt;/strong&gt; (DLR)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Jean-Marco Alameddine&lt;/strong&gt; (DLR)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Ángel Bueno&lt;/strong&gt; (DLR)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Felix Sattler&lt;/strong&gt; (DLR)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Hamza Hanif&lt;/strong&gt; (Simon Fraser University)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Zlatan Dimitrov&lt;/strong&gt; (Adastra and GATE Institute)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Florian Bury&lt;/strong&gt; (University of Bristol)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Carlo Mancini-Terracciano&lt;/strong&gt; (Sapienza University of Rome)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Lorenzo Arsini&lt;/strong&gt; (Sapienza University of Rome)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Andrea Santamaría García&lt;/strong&gt; (University of Liverpool)&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Progress in end-to-end optimization of fundamental physics experimental apparata with differentiable programming</title><link>https://mode-demo.github.io/publication/aehle-2025/</link><pubDate>Mon, 01 Dec 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/aehle-2025/</guid><description/></item><item><title>Fifth MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/fifth_workshop/</link><pubDate>Mon, 09 Jun 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/fifth_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>Hadron Identification Prospects with Granular Calorimeters</title><link>https://mode-demo.github.io/publication/de-vita-2025/</link><pubDate>Thu, 01 May 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/de-vita-2025/</guid><description/></item><item><title>On the utility function of experiments in fundamental science</title><link>https://mode-demo.github.io/publication/dorigo-2025/</link><pubDate>Thu, 01 May 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/dorigo-2025/</guid><description/></item><item><title>Fifth MODE Workshop on Differentiable Programming</title><link>https://mode-demo.github.io/post/fifth-mode-workshop-2025/</link><pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/fifth-mode-workshop-2025/</guid><description>&lt;p&gt;The Fifth MODE Workshop on Differentiable Programming for experiment design took place at OAC (Kolymbari, Crete).&lt;/p&gt;
&lt;p&gt;Registration and abstract submission were open on the &lt;a href="https://indico.cern.ch/event/1481852/" target="_blank" rel="noopener"&gt;Indico page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Keynote talks by Danilo Rezende (DeepMind), Andrea Walther (HU Berlin), and Riccardo Zecchina (Bocconi).&lt;/p&gt;</description></item><item><title>People</title><link>https://mode-demo.github.io/people/</link><pubDate>Wed, 12 Feb 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/people/</guid><description/></item><item><title>A Multiple Readout Ultra-High Segmentation Detector Concept For Future Colliders</title><link>https://mode-demo.github.io/publication/bilki-2025-f-9/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/bilki-2025-f-9/</guid><description/></item><item><title>AI-assisted design of experiments at the frontiers of computation: methods and new perspectives</title><link>https://mode-demo.github.io/publication/vischia-2025/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/vischia-2025/</guid><description/></item><item><title>Automatic Optimization of a Parallel-Plate Avalanche Counter with Optical Readout</title><link>https://mode-demo.github.io/publication/particles-8010026/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8010026/</guid><description/></item><item><title>Bias Reduction Using Expectation Maximization in the Optimization of an AI-Assisted Muon Tomography System</title><link>https://mode-demo.github.io/publication/dela-puente-santos-20252-q/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/dela-puente-santos-20252-q/</guid><description/></item><item><title>Bringing Automatic Differentiation to CUDA with Compiler-Based Source Transformations</title><link>https://mode-demo.github.io/publication/koutsou-2025-qv/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/koutsou-2025-qv/</guid><description/></item><item><title>Design optimization of hadronic 
calorimeters for future colliders</title><link>https://mode-demo.github.io/publication/de-matos-rodrigues-2025-z/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/de-matos-rodrigues-2025-z/</guid><description/></item><item><title>Development and Explainability of Models for Machine-Learning-Based Reconstruction of Signals in Particle Detectors</title><link>https://mode-demo.github.io/publication/particles-8020048/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020048/</guid><description/></item><item><title>Differentiable Deep Learning Surrogate Models Applied to the Optimization of the IFMIF-DONES Facility</title><link>https://mode-demo.github.io/publication/particles-8010021/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8010021/</guid><description/></item><item><title>End-to-End Detector Optimization with Diffusion models: A Case Study in Sampling Calorimeters</title><link>https://mode-demo.github.io/publication/schmidt-2025-endtoenddetectoroptimizationdiffusion/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/schmidt-2025-endtoenddetectoroptimizationdiffusion/</guid><description/></item><item><title>End-to-End Detector Optimization with Diffusion Models: A Case Study in Sampling Calorimeters</title><link>https://mode-demo.github.io/publication/particles-8020047/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020047/</guid><description/></item><item><title>From Light to Muons: Towards a Unified Framework for Physics-based 3D Scene Reconstruction</title><link>https://mode-demo.github.io/publication/sattler-2025-t-q/</link><pubDate>Wed, 01 Jan 2025 00:00:00 
+0000</pubDate><guid>https://mode-demo.github.io/publication/sattler-2025-t-q/</guid><description/></item><item><title>Gradient-descent-based reconstruction for muon tomography based on automatic differentiation in PyTorch</title><link>https://mode-demo.github.io/publication/alameddine-2025-qq/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/alameddine-2025-qq/</guid><description/></item><item><title>Hadron Identification Prospects with Granular Calorimeters</title><link>https://mode-demo.github.io/publication/particles-8020058/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020058/</guid><description/></item><item><title>Information Field Theory for Two Applications in Astroparticle Physics</title><link>https://mode-demo.github.io/publication/particles-8020039/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020039/</guid><description/></item><item><title>Machine Learning Approach to Shield Optimization at Muon Collider</title><link>https://mode-demo.github.io/publication/particles-8010025/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8010025/</guid><description/></item><item><title>Muographic Image Upsampling with Machine Learning for Built Infrastructure Applications</title><link>https://mode-demo.github.io/publication/particles-8010033/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8010033/</guid><description/></item><item><title>Neuromorphic Readout for Hadron Calorimeters</title><link>https://mode-demo.github.io/publication/particles-8020052/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020052/</guid><description/></item><item><title>Optimisation of Muon Tomography Scanners for 
Border Control Using TomOpt</title><link>https://mode-demo.github.io/publication/particles-8020053/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020053/</guid><description/></item><item><title>Optimization pipeline for in-ice radio neutrino detectors</title><link>https://mode-demo.github.io/publication/ravn-2025-an/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/ravn-2025-an/</guid><description/></item><item><title>Porting MADGRAPH to FPGA Using High-Level Synthesis (HLS)</title><link>https://mode-demo.github.io/publication/particles-8030063/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8030063/</guid><description/></item><item><title>Production Optimization of Exotic Hypernuclei via Heavy-Ion Beams at GSI-FAIR</title><link>https://mode-demo.github.io/publication/particles-8020054/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020054/</guid><description/></item><item><title>Scattering-Based Machine Learning Algorithms for Momentum Estimation in Muon Tomography</title><link>https://mode-demo.github.io/publication/particles-8020043/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020043/</guid><description/></item><item><title>Unsupervised Particle Tracking with Neuromorphic Computing</title><link>https://mode-demo.github.io/publication/particles-8020040/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020040/</guid><description/></item><item><title>Versal Adaptive Compute Acceleration Platform Processing for ATLAS-TileCal Signal Reconstruction</title><link>https://mode-demo.github.io/publication/particles-8020049/</link><pubDate>Wed, 01 Jan 2025 00:00:00 
+0000</pubDate><guid>https://mode-demo.github.io/publication/particles-8020049/</guid><description/></item><item><title>Fifth MODE Workshop Announced for June 2025</title><link>https://mode-demo.github.io/post/fifth-workshop-announcement-2024/</link><pubDate>Thu, 21 Nov 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/fifth-workshop-announcement-2024/</guid><description>&lt;p&gt;The Fifth MODE Workshop on Differentiable Programming for experiment design will take place at OAC (Kolymbari, Crete) on &lt;strong&gt;June 9–13, 2025&lt;/strong&gt;. Mark the date! Registration and abstract submission will open soon.&lt;/p&gt;</description></item><item><title>New Coordinator and Steering Board Elected</title><link>https://mode-demo.github.io/post/new-coordinator-2024/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/new-coordinator-2024/</guid><description>&lt;p&gt;The MODE Collaboration has elected a new coordinator, Prof. &lt;strong&gt;Pietro Vischia&lt;/strong&gt; (UniOvi and ICTEA), and a new Steering Board:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Prof. &lt;strong&gt;Tommaso Dorigo&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Nicolas Gauger&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Andrea Giammanco&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Christian Glaser&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Lisa Kusch&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Fedor Ratnikov&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Pietro Vischia&lt;/strong&gt; (UniOvi and ICTEA)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Huge thanks to the previous coordinator (T. Dorigo) and Steering Board!&lt;/p&gt;</description></item><item><title>New Members Joined MODE in November 2024</title><link>https://mode-demo.github.io/post/new-members-nov-2024/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/new-members-nov-2024/</guid><description>&lt;p&gt;Several new members joined the MODE Collaboration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dr. &lt;strong&gt;Stephen M. Casey&lt;/strong&gt; (NASA)&lt;/li&gt;
&lt;li&gt;Dr. &lt;strong&gt;Christian Haack&lt;/strong&gt; (FAU Erlangen–Nürnberg)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kalina Dimitrova&lt;/strong&gt; (Sofia University)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Venelin Kozhuharov&lt;/strong&gt; (Sofia University)&lt;/li&gt;
&lt;li&gt;Prof. &lt;strong&gt;Peicho Petkov&lt;/strong&gt; (Sofia University)&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Fourth MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/fourth_workshop/</link><pubDate>Mon, 23 Sep 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/fourth_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>TomOpt: differential optimisation for task- and constraint-aware design of particle detectors in the context of muon tomography</title><link>https://mode-demo.github.io/publication/strong-2024/</link><pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/strong-2024/</guid><description/></item><item><title>Preprint: Optimization Using Pathwise Algorithmic Derivatives</title><link>https://mode-demo.github.io/post/pathwise-derivatives-preprint-2024/</link><pubDate>Mon, 13 May 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/pathwise-derivatives-preprint-2024/</guid><description>&lt;p&gt;The preprint &lt;em&gt;Optimization Using Pathwise Algorithmic Derivatives of Electromagnetic Shower Simulations&lt;/em&gt;, led by several MODE members, is now online.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://arxiv.org/abs/2405.07944" target="_blank" rel="noopener"&gt;Read on arXiv&lt;/a&gt;&lt;/p&gt;</description></item><item><title>TomOpt Paper Accepted by ML: Science and Technology</title><link>https://mode-demo.github.io/post/tomopt-accepted-2024/</link><pubDate>Tue, 07 May 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/tomopt-accepted-2024/</guid><description>&lt;p&gt;The paper &lt;em&gt;TomOpt: Differential optimisation for task- and constraint-aware design of particle detectors in the context of muon tomography&lt;/em&gt; has been accepted for publication by &lt;em&gt;Machine Learning: Science and Technology&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://arxiv.org/abs/2309.14027" target="_blank" rel="noopener"&gt;Read on arXiv&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Fourth MODE Workshop at IFIC Valencia</title><link>https://mode-demo.github.io/post/fourth-mode-workshop-2024/</link><pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/post/fourth-mode-workshop-2024/</guid><description>&lt;p&gt;The Fourth MODE Workshop on Differentiable Programming for experiment design took place at IFIC Valencia.&lt;/p&gt;
&lt;p&gt;Registration and abstract submission were open on the &lt;a href="https://indico.cern.ch/event/1380163/" target="_blank" rel="noopener"&gt;Indico page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Keynote talks were given by Danilo Rezende (DeepMind), Andrea Walther (HU Berlin), and Riccardo Zecchina (Bocconi).&lt;/p&gt;</description></item><item><title>Learn JavaScript</title><link>https://mode-demo.github.io/teaching/js/</link><pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/teaching/js/</guid><description>&lt;p&gt;&lt;a href="https://hugoblox.com" target="_blank" rel="noopener"&gt;Hugo Blox Builder&lt;/a&gt; is designed to give technical content creators a seamless experience. You can focus on the content while Hugo Blox Builder, upon which this template is built, handles the rest.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Embed videos, podcasts, code, LaTeX math, and even test students!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On this page, you&amp;rsquo;ll find some examples of the types of technical content that can be rendered with Hugo Blox.&lt;/p&gt;
&lt;h2 id="video"&gt;Video&lt;/h2&gt;
&lt;p&gt;Teach your course by sharing videos with your students. Choose from one of the following approaches:&lt;/p&gt;
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/D2vj0WcvH5c?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;YouTube&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; youtube w7Ft2ymGmfc &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Bilibili&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; bilibili id=&amp;quot;BV1WV4y1r7DF&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Video file&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;Videos may be added to a page by either placing them in your &lt;code&gt;assets/media/&lt;/code&gt; media library or in your &lt;a href="https://gohugo.io/content-management/page-bundles/" target="_blank" rel="noopener"&gt;page&amp;rsquo;s folder&lt;/a&gt;, and then embedding them with the &lt;em&gt;video&lt;/em&gt; shortcode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; video src=&amp;quot;my_video.mp4&amp;quot; controls=&amp;quot;yes&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="podcast"&gt;Podcast&lt;/h2&gt;
&lt;p&gt;You can add a podcast or music to a page by placing the MP3 file in the page&amp;rsquo;s folder or the media library folder and then embedding the audio on your page with the &lt;em&gt;audio&lt;/em&gt; shortcode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; audio src=&amp;quot;ambient-piano.mp3&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Try it out:&lt;/p&gt;
&lt;audio controls&gt;
&lt;source src="https://mode-demo.github.io/teaching/js/ambient-piano.mp3" type="audio/mpeg"&gt;
&lt;/audio&gt;
&lt;h2 id="test-students"&gt;Test students&lt;/h2&gt;
&lt;p&gt;Provide a simple yet fun self-assessment by revealing the solutions to challenges with the &lt;code&gt;spoiler&lt;/code&gt; shortcode:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{{&amp;lt; spoiler text=&amp;#34;👉 Click to view the solution&amp;#34; &amp;gt;}}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;You found me!
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{{&amp;lt; /spoiler &amp;gt;}}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
&lt;details class="spoiler " id="spoiler-2"&gt;
&lt;summary&gt;👉 Click to view the solution&lt;/summary&gt;
&lt;p&gt;You found me!&lt;/p&gt;
&lt;/details&gt;
&lt;h2 id="math"&gt;Math&lt;/h2&gt;
&lt;p&gt;Hugo Blox Builder supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the &lt;code&gt;math&lt;/code&gt; option in your &lt;code&gt;config/_default/params.yaml&lt;/code&gt; file.&lt;/p&gt;
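&lt;p&gt;For example, the toggle might look like this in &lt;code&gt;config/_default/params.yaml&lt;/code&gt; (key names are illustrative and can differ between template versions):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;features:
  math:
    enable: true
&lt;/code&gt;&lt;/pre&gt;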
&lt;p&gt;To render &lt;em&gt;inline&lt;/em&gt; or &lt;em&gt;block&lt;/em&gt; math, wrap your LaTeX math with &lt;code&gt;{{&amp;lt; math &amp;gt;}}$...${{&amp;lt; /math &amp;gt;}}&lt;/code&gt; or &lt;code&gt;{{&amp;lt; math &amp;gt;}}$$...$${{&amp;lt; /math &amp;gt;}}&lt;/code&gt;, respectively.&lt;/p&gt;
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
We wrap the LaTeX math in the Hugo Blox &lt;em&gt;math&lt;/em&gt; shortcode to prevent Hugo rendering our math as Markdown.
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Example &lt;strong&gt;math block&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-latex" data-lang="latex"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sb"&gt;$$&lt;/span&gt;&lt;span class="nb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;\gamma&lt;/span&gt;&lt;span class="nb"&gt;_{n} &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\frac&lt;/span&gt;&lt;span class="nb"&gt;{ &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; | &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n} &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;} &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;^T &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span 
class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; |}{&lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\|\nabla&lt;/span&gt;&lt;span class="nb"&gt; F&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt;{x}_{n}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt;{x}_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\|&lt;/span&gt;&lt;span class="nb"&gt;^&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="nb"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;$$&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; /math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$
&lt;p&gt;Example &lt;strong&gt;inline math&lt;/strong&gt; &lt;code&gt;{{&amp;lt; math &amp;gt;}}$\nabla F(\mathbf{x}_{n})${{&amp;lt; /math &amp;gt;}}&lt;/code&gt; renders as
$\nabla F(\mathbf{x}_{n})$.&lt;/p&gt;
&lt;p&gt;Example &lt;strong&gt;multi-line math&lt;/strong&gt; using the math linebreak (&lt;code&gt;\\&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-latex" data-lang="latex"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sb"&gt;$$&lt;/span&gt;&lt;span class="nb"&gt;f&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;k;p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\begin&lt;/span&gt;&lt;span class="nb"&gt;{cases}p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;} &amp;amp; &lt;/span&gt;&lt;span class="nv"&gt;\text&lt;/span&gt;&lt;span class="nb"&gt;{if }k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;, &lt;/span&gt;&lt;span class="nv"&gt;\\&lt;/span&gt;&lt;span class="nb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt;p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;} &amp;amp; &lt;/span&gt;&lt;span class="nv"&gt;\text&lt;/span&gt;&lt;span class="nb"&gt;{if }k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;\end&lt;/span&gt;&lt;span class="nb"&gt;{cases}&lt;/span&gt;&lt;span class="s"&gt;$$&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; /math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
$$
f(k;p_{0}^{*}) = \begin{cases}p_{0}^{*} &amp; \text{if }k=1, \\
1-p_{0}^{*} &amp; \text{if }k=0.\end{cases}
$$
&lt;h2 id="code"&gt;Code&lt;/h2&gt;
&lt;p&gt;Hugo Blox Builder utilises Hugo&amp;rsquo;s Markdown extension for highlighting code syntax. The code theme can be selected in the &lt;code&gt;config/_default/params.yaml&lt;/code&gt; file.&lt;/p&gt;
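&lt;p&gt;For example, light and dark code themes might be configured roughly as follows (a sketch only; the exact keys can differ between Hugo Blox releases, so check the Hugo Blox documentation for your version):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# config/_default/params.yaml
features:
  syntax_highlighter:
    theme_light: github-light
    theme_dark: dracula
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Code blocks themselves are written as standard fenced Markdown:&lt;/p&gt;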
&lt;pre&gt;&lt;code&gt;```python
import pandas as pd
data = pd.read_csv(&amp;quot;data.csv&amp;quot;)
data.head()
```
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;renders as&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;pd&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;data.csv&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="inline-images"&gt;Inline Images&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{{&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;icon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;&amp;#34;python&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&amp;gt;}}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Python&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
&lt;p&gt;
&lt;i class="fas fa-python pr-1 fa-fw"&gt;&lt;/i&gt; Python&lt;/p&gt;
&lt;h2 id="did-you-find-this-page-helpful-consider-sharing-it-"&gt;Did you find this page helpful? Consider sharing it 🙌&lt;/h2&gt;</description></item><item><title>Learn Python</title><link>https://mode-demo.github.io/teaching/python/</link><pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/teaching/python/</guid><description>&lt;p&gt;&lt;a href="https://hugoblox.com" target="_blank" rel="noopener"&gt;Hugo Blox Builder&lt;/a&gt; is designed to give technical content creators a seamless experience. You can focus on the content and the Hugo Blox Builder which this template is built upon handles the rest.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Embed videos, podcasts, code, LaTeX math, and even test students!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;On this page, you&amp;rsquo;ll find some examples of the types of technical content that can be rendered with Hugo Blox.&lt;/p&gt;
&lt;h2 id="video"&gt;Video&lt;/h2&gt;
&lt;p&gt;Teach your course by sharing videos with your students. Choose from one of the following approaches:&lt;/p&gt;
&lt;div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;"&gt;
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/D2vj0WcvH5c?autoplay=0&amp;amp;controls=1&amp;amp;end=0&amp;amp;loop=0&amp;amp;mute=0&amp;amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"&gt;&lt;/iframe&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;YouTube&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; youtube w7Ft2ymGmfc &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Bilibili&lt;/strong&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; bilibili id=&amp;quot;BV1WV4y1r7DF&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Video file&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Videos may be added to a page by either placing them in your &lt;code&gt;assets/media/&lt;/code&gt; media library or in your &lt;a href="https://gohugo.io/content-management/page-bundles/" target="_blank" rel="noopener"&gt;page&amp;rsquo;s folder&lt;/a&gt;, and then embedding them with the &lt;em&gt;video&lt;/em&gt; shortcode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; video src=&amp;quot;my_video.mp4&amp;quot; controls=&amp;quot;yes&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 id="podcast"&gt;Podcast&lt;/h2&gt;
&lt;p&gt;You can add a podcast or music to a page by placing the MP3 file in the page&amp;rsquo;s folder or the media library folder and then embedding the audio on your page with the &lt;em&gt;audio&lt;/em&gt; shortcode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;{{&amp;lt; audio src=&amp;quot;ambient-piano.mp3&amp;quot; &amp;gt;}}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Try it out:&lt;/p&gt;
&lt;audio controls &gt;
&lt;source src="https://mode-demo.github.io/teaching/python/ambient-piano.mp3" type="audio/mpeg"&gt;
&lt;/audio&gt;
&lt;h2 id="test-students"&gt;Test students&lt;/h2&gt;
&lt;p&gt;Provide a simple yet fun self-assessment by revealing the solutions to challenges with the &lt;code&gt;spoiler&lt;/code&gt; shortcode:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{{&amp;lt; spoiler text=&amp;#34;👉 Click to view the solution&amp;#34; &amp;gt;}}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;You found me!
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;{{&amp;lt; /spoiler &amp;gt;}}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
&lt;details class="spoiler " id="spoiler-2"&gt;
&lt;summary&gt;👉 Click to view the solution&lt;/summary&gt;
&lt;p&gt;You found me 🎉&lt;/p&gt;
&lt;/details&gt;
&lt;h2 id="math"&gt;Math&lt;/h2&gt;
&lt;p&gt;Hugo Blox Builder supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the &lt;code&gt;math&lt;/code&gt; option in your &lt;code&gt;config/_default/params.yaml&lt;/code&gt; file.&lt;/p&gt;
&lt;p&gt;To render &lt;em&gt;inline&lt;/em&gt; or &lt;em&gt;block&lt;/em&gt; math, wrap your LaTeX math with &lt;code&gt;{{&amp;lt; math &amp;gt;}}$...${{&amp;lt; /math &amp;gt;}}&lt;/code&gt; or &lt;code&gt;{{&amp;lt; math &amp;gt;}}$$...$${{&amp;lt; /math &amp;gt;}}&lt;/code&gt;, respectively.&lt;/p&gt;
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
We wrap the LaTeX math in the Hugo Blox &lt;em&gt;math&lt;/em&gt; shortcode to prevent Hugo rendering our math as Markdown.
&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Example &lt;strong&gt;math block&lt;/strong&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-latex" data-lang="latex"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sb"&gt;$$&lt;/span&gt;&lt;span class="nb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nv"&gt;\gamma&lt;/span&gt;&lt;span class="nb"&gt;_{n} &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\frac&lt;/span&gt;&lt;span class="nb"&gt;{ &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; | &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n} &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;} &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;^T &lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F &lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt; x_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;&lt;span 
class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; |}{&lt;/span&gt;&lt;span class="nv"&gt;\left&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\|\nabla&lt;/span&gt;&lt;span class="nb"&gt; F&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt;{x}_{n}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\nabla&lt;/span&gt;&lt;span class="nb"&gt; F&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;\mathbf&lt;/span&gt;&lt;span class="nb"&gt;{x}_{n&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\right&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\|&lt;/span&gt;&lt;span class="nb"&gt;^&lt;/span&gt;&lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="nb"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="s"&gt;$$&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; /math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$
&lt;p&gt;Example &lt;strong&gt;inline math&lt;/strong&gt; &lt;code&gt;{{&amp;lt; math &amp;gt;}}$\nabla F(\mathbf{x}_{n})${{&amp;lt; /math &amp;gt;}}&lt;/code&gt; renders as
$\nabla F(\mathbf{x}_{n})$.&lt;/p&gt;
&lt;p&gt;Example &lt;strong&gt;multi-line math&lt;/strong&gt; using the math linebreak (&lt;code&gt;\\&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-latex" data-lang="latex"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="sb"&gt;$$&lt;/span&gt;&lt;span class="nb"&gt;f&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;k;p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;}&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt; &lt;/span&gt;&lt;span class="nv"&gt;\begin&lt;/span&gt;&lt;span class="nb"&gt;{cases}p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;} &amp;amp; &lt;/span&gt;&lt;span class="nv"&gt;\text&lt;/span&gt;&lt;span class="nb"&gt;{if }k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="nb"&gt;, &lt;/span&gt;&lt;span class="nv"&gt;\\&lt;/span&gt;&lt;span class="nb"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nb"&gt;p_{&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;}^{&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nb"&gt;} &amp;amp; &lt;/span&gt;&lt;span class="nv"&gt;\text&lt;/span&gt;&lt;span class="nb"&gt;{if }k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;&lt;span class="nv"&gt;\end&lt;/span&gt;&lt;span class="nb"&gt;{cases}&lt;/span&gt;&lt;span class="s"&gt;$$&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="nb"&gt;{{&lt;/span&gt;&amp;lt; /math &amp;gt;&lt;span class="nb"&gt;}}&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
$$
f(k;p_{0}^{*}) = \begin{cases}p_{0}^{*} &amp; \text{if }k=1, \\
1-p_{0}^{*} &amp; \text{if }k=0.\end{cases}
$$
&lt;h2 id="code"&gt;Code&lt;/h2&gt;
&lt;p&gt;Hugo Blox Builder utilises Hugo&amp;rsquo;s Markdown extension for highlighting code syntax. The code theme can be selected in the &lt;code&gt;config/_default/params.yaml&lt;/code&gt; file.&lt;/p&gt;
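&lt;p&gt;For example, light and dark code themes might be configured roughly as follows (a sketch only; the exact keys can differ between Hugo Blox releases, so check the Hugo Blox documentation for your version):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# config/_default/params.yaml
features:
  syntax_highlighter:
    theme_light: github-light
    theme_dark: dracula
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Code blocks themselves are written as standard fenced Markdown:&lt;/p&gt;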
&lt;pre&gt;&lt;code&gt;```python
import pandas as pd
data = pd.read_csv(&amp;quot;data.csv&amp;quot;)
data.head()
```
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;renders as&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-python" data-lang="python"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nn"&gt;pd&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;data.csv&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id="inline-images"&gt;Inline Images&lt;/h2&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-go" data-lang="go"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;{{&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;icon&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;&amp;#34;python&amp;#34;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&amp;gt;}}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;Python&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;renders as&lt;/p&gt;
&lt;p&gt;
&lt;i class="fas fa-python pr-1 fa-fw"&gt;&lt;/i&gt; Python&lt;/p&gt;
&lt;h2 id="did-you-find-this-page-helpful-consider-sharing-it-"&gt;Did you find this page helpful? Consider sharing it 🙌&lt;/h2&gt;</description></item><item><title>Data-Driven Decision-Making Algorithms</title><link>https://mode-demo.github.io/project/algorithms/</link><pubDate>Wed, 04 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/project/algorithms/</guid><description>&lt;!-- A main research direction for the AIR-DREAM Lab is to develop high-performance, robust, generalizable, and real-world deployable data-driven decision-making algorithms. We are specifically interested in offline policy learning methods, such as offline reinforcement learning (RL), offline imitation learning (IL), and offline planning, which enable a simulation-free and low-cost solution to address many real-world problems.
Our current research focus includes:
- Sample-efficient / high-generalization offline RL / IL / planning algorithms
- Foundation models for decision-making
- Safe offline RL algorithms
- Hybrid RL that combines offline and online policy learning
- Offline policy learning under imperfect reward
- Feedback-efficient RLHF --&gt;
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
A main research direction for the AIR-DREAM Lab is to develop high-performance, robust, generalizable, and real-world deployable data-driven decision-making algorithms. We are specifically interested in offline policy learning methods, such as offline reinforcement learning (RL), offline imitation learning (IL), and offline planning, which enable a simulation-free and low-cost solution to address many real-world problems.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px;"&gt;Our current research focus includes:&lt;/h3&gt;
&lt;!-- Card-style layout --&gt;
&lt;div style="display: grid; grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); gap: 24px; margin-top: 24px;"&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #00bcd4;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Sample-efficient / high-generalization offline RL / IL / planning algorithms&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Data-driven control optimization for complex industrial systems&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #4caf50;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Foundation models for decision-making&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Energy saving optimization for data centers&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #ff9800;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Safe offline RL algorithms&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Massive MIMO Beamforming optimization for 5G&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid rgb(255, 204, 0);"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Hybrid RL that combines offline and online policy learning&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Massive MIMO Beamforming optimization for 5G&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #9c27b0;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Offline policy learning under imperfect reward&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Engineering policy integrated hybrid reinforcement learning&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid rgb(215, 58, 205);"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Feedback-efficient RLHF&lt;/h4&gt;
&lt;!-- &lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Engineering policy integrated hybrid reinforcement learning&lt;/p&gt; --&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Learning-based Methods for Robotics &amp; Autonomous Driving</title><link>https://mode-demo.github.io/project/robotics/</link><pubDate>Tue, 03 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/project/robotics/</guid><description>&lt;!-- We focus on developing robotic control and autonomous driving policy learning methods that could directly learn from real-world data, bypassing or alleviating sim-to-real gap, while achieving robust and generalizable performance.
Our current research focus includes:
- Offline RL / IL / planning methods for autonomous driving and robotic control
- Offline policy optimization for safety-critical scenarios
- Foundation models for robotic control
- Sim-to-real adaptation
**Latest research**:
- [Diffusion-Planner: Diffusion-Based Planning for Autonomous Driving with Flexible Guidance](../../publication/zheng-2025-diffusion/) --&gt;
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
We focus on developing robotic control and autonomous driving policy learning methods that can learn directly from real-world data, bypassing or alleviating the sim-to-real gap, while achieving robust and generalizable performance.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px;"&gt;Our current research focus includes:&lt;/h3&gt;
&lt;!-- Card-style layout --&gt;
&lt;div style="display: grid; grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); gap: 24px; margin-top: 24px;"&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #00bcd4;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Offline RL / IL / planning methods for autonomous driving and robotic control&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #4caf50;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Offline policy optimization for safety-critical scenarios&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #ff9800;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Foundation models for robotic control&lt;/h4&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #9c27b0;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Sim-to-real adaptation&lt;/h4&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div align="center" style="font-family: Helvetica, sans-serif; margin-bottom: 1em; margin-top: 60px;"&gt;
&lt;h1 style="color: #00bcd4; text-transform: uppercase; font-size: 40px; margin: 0;"&gt;Latest Achievement&lt;/h1&gt;
&lt;div class="card"&gt;
&lt;h3 style="color: #121212; font-size: 24px; font-weight: bold; margin: 0.3em 0 1em;"&gt;
&lt;a href="../../publication/zheng-2025-xvla/" style="color:rgb(212, 191, 55);"&gt;X-VLA has won First Place in the AGIBOT World Challenge (Manipulation track) @ IROS 2025!&lt;/a&gt;&lt;/h3&gt;
&lt;/div&gt;
&lt;div class="card"&gt;
&lt;h3 style="color: #121212; font-size: 24px; font-weight: bold; margin: 0.3em 0 1em;"&gt;
&lt;a href="../../publication/zheng-2025-diffusion/" style="color:rgb(13, 181, 227);"&gt;Diffusion-Planner: Diffusion-Based Planning for Autonomous Driving with Flexible Guidance&lt;/a&gt;&lt;/h3&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;style&gt;
.card {
background: white;
border-radius: 12px;
padding: 5px;
box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05);
transition: transform 0.3s ease;
border: none;
}
/* Hover effect */
.card:hover {
transform: scale(1.05); /* scale up */
box-shadow: 0 10px 25px rgba(0, 0, 0, 0.15); /* stronger shadow */
}
&lt;/style&gt;</description></item><item><title>Data-Driven Methods for Sustainable Industrial and AIoT Systems</title><link>https://mode-demo.github.io/project/aiot/</link><pubDate>Mon, 02 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/project/aiot/</guid><description>&lt;!-- Conventional industrial systems and emerging systems such as data centers and 5G communication networks consume enormous amounts of energy and non-renewable resources. We focus on developing advanced data-driven AI methods to optimize real-world complex industrial and AIoT systems. We help the related industries improve operation efficiency, save energy, and reduce emissions, ultimately achieving the goal of green and sustainable development.
Our current research focus includes:
- Simulator-free data-driven control optimization for complex industrial systems
- Energy saving optimization for data centers
- 5G Massive MIMO Beamforming optimization
- Engineering policy integrated hybrid RL --&gt;
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;!-- &lt;p style="font-size: 18px;"&gt;
Conventional industrial systems and emerging systems such as data centers and 5G communication networks consume enormous amounts of energy and non-renewable resources.
We focus on developing advanced data-driven AI methods to optimize real-world complex industrial and AIoT systems.
We help the related industries improve operation efficiency, save energy, and reduce emissions, ultimately achieving the goal of green and sustainable development.
&lt;/p&gt; --&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
Conventional industrial systems and emerging systems such as data centers and 5G communication networks consume enormous amounts of energy and non-renewable resources.
We focus on developing advanced data-driven AI methods to optimize real-world complex industrial and AIoT systems.
We help the related industries improve operation efficiency, save energy, and reduce emissions, ultimately achieving the goal of green and sustainable development.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px;"&gt;Our current research focus includes:&lt;/h3&gt;
&lt;!-- Card-style layout --&gt;
&lt;div style="display: grid; grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); gap: 24px; margin-top: 24px;"&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #00bcd4;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Simulator-Free Optimization&lt;/h4&gt;
&lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Data-driven control optimization for complex industrial systems&lt;/p&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #4caf50;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Data Center Efficiency&lt;/h4&gt;
&lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Energy saving optimization for data centers&lt;/p&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #ff9800;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;5G Beamforming&lt;/h4&gt;
&lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Massive MIMO Beamforming optimization for 5G&lt;/p&gt;
&lt;/div&gt;
&lt;div style="background: white; border-radius: 12px; padding: 24px; box-shadow: 0 5px 15px rgba(0, 0, 0, 0.05); transition: transform 0.3s ease; border-left: 4px solid #9c27b0;"&gt;
&lt;h4 style="margin-top: 0; margin-bottom: 12px; color: #222; font-size: 18px;"&gt;Hybrid RL&lt;/h4&gt;
&lt;p style="margin: 0; font-size: 16px; color: #555;"&gt;Engineering policy integrated hybrid reinforcement learning&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div align="center" style="font-family: Helvetica, sans-serif; margin-bottom: 1em; margin-top: 60px;"&gt;
&lt;h2 style="color: #00bcd4; text-transform: uppercase; font-size: 40px; margin: 0;"&gt;Latest Achievement&lt;/h2&gt;
&lt;h1 style="color: #222; font-size: 28px; font-weight: bold; margin: 0.3em 0 1em;"&gt;Data Center Cooling System Optimization&lt;/h1&gt;
&lt;/div&gt;
&lt;!-- &lt;div align="center"&gt;
&lt;iframe
src="https://player.bilibili.com/player.html?bvid=BV1ADMcz2EYf&amp;autoplay=1&amp;loop=1"
allowfullscreen
style="width: 100%; max-width: 960px; aspect-ratio: 16/9; border: 0; border-radius: 20px; box-shadow: 0 4px 20px rgba(0,0,0,0.1);"&gt;
&lt;/iframe&gt;
&lt;/div&gt; --&gt;
&lt;div align="center" style="
position: relative;
overflow: hidden;
border-radius: 20px;
box-shadow: 0 4px 20px rgba(0,0,0,0.1);
display: inline-block;
background: #fff; /* keep the background color consistent */
line-height: 0; /* remove line-height influence */
font-size: 0; /* remove font-size gaps */
width: 100%;
max-width: 960px;
"&gt;
&lt;iframe
src="https://player.bilibili.com/player.html?bvid=BV1ADMcz2EYf&amp;autoplay=1&amp;loop=1"
allowfullscreen
style="
display: block;
width: 100%;
height: auto;
aspect-ratio: 16/9;
border: 0;
border-radius: 20px;
background: #fff;
transform: translateZ(0);
vertical-align: bottom; /* remove the bottom gap */
"&gt;
&lt;/iframe&gt;
&lt;!-- Boundary overlay: keeps the top edge clean --&gt;
&lt;div style="
position: absolute;
top: 0;
left: 0;
right: 0;
height: 1px;
background: #fff;
z-index: 10;
pointer-events: none;
"&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</description></item><item><title>Tools &amp; Libraries</title><link>https://mode-demo.github.io/project/libs/</link><pubDate>Sun, 01 Oct 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/project/libs/</guid><description>&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
We provide open-source implementations for most of our research; please check our papers for the associated code. In addition, we aim to develop easy-to-use, comprehensive algorithm libraries and tools that accelerate the real-world deployment of advanced data-driven decision-making methods.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px; text-align: center;"&gt;Data-Drivien Decision-Making Libraries / Tools&lt;/h3&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="screen reader text" srcset="
/project/libs/d2c-logo_hu_5d40481b3d148996.webp 400w,
/project/libs/d2c-logo_hu_55cf71f6467108d1.webp 760w,
/project/libs/d2c-logo_hu_169fc8daa277fe2a.webp 1200w"
src="https://mode-demo.github.io/project/libs/d2c-logo_hu_5d40481b3d148996.webp"
width="339"
height="123"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
&lt;a href="https://github.com/AIR-DI/D2C"&gt;Data-Driven Control Lib (D2C)&lt;/a&gt; is a library for data-driven decision-making &amp; control based on state-of-the-art offline reinforcement learning (RL), offline imitation learning (IL), and offline planning algorithms. It is a platform for solving various decision-making &amp; control problems in real-world scenarios. D2C is designed to offer fast and convenient algorithm performance development and testing, as well as providing easy-to-use toolchains to accelerate the real-world deployment of SOTA data-driven decision-making methods.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;h3 style="margin-top: 24px; color:rgb(94, 120, 225); font-size: 20px;"&gt;The current supported offline RL/IL algorithms include (more to come):&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/pdf/2106.06860.pdf" target="_blank" rel="noopener"&gt;Twin Delayed DDPG with Behavior Cloning (TD3+BC)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2205.11027.pdf" target="_blank" rel="noopener"&gt;Distance-Sensitive Offline Reinforcement Learning (DOGE)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2206.13464.pdf" target="_blank" rel="noopener"&gt;Dynamics-Aware Hybrid Offline-and-Online Reinforcement Learning (H2O)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2303.15810" target="_blank" rel="noopener"&gt;Sparse Q-learning (SQL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2210.08323" target="_blank" rel="noopener"&gt;Policy-guided Offline RL (POR)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/pdf/2110.06169.pdf" target="_blank" rel="noopener"&gt;Offline Reinforcement Learning with Implicit Q-Learning (IQL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2207.00244" target="_blank" rel="noopener"&gt;Discriminator-Guided Model-Based Offline Imitation Learning (DMIL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.cse.unsw.edu.au/~claude/papers/MI15.pdf" target="_blank" rel="noopener"&gt;Behavior Cloning (BC)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
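As a minimal illustration of the simplest method in the list above, behavior cloning treats offline policy learning as supervised regression on logged (state, action) pairs. The sketch below fits a linear policy to a synthetic logged dataset with plain NumPy; it is a conceptual example only, and the dataset and variable names are invented here, not part of D2C's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logged dataset: an unknown linear "expert" maps states to actions.
W_expert = np.array([[1.5, -0.5], [0.2, 0.8]])
states = rng.normal(size=(1000, 2))        # logged states
actions = states @ W_expert.T              # logged expert actions

# Behavior cloning: minimize ||S W^T - A||^2, i.e. ordinary least squares.
W_bc, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_bc = W_bc.T

# The cloned policy should reproduce the expert on fresh states.
test_states = rng.normal(size=(10, 2))
err = np.max(np.abs(test_states @ W_bc.T - test_states @ W_expert.T))
print(f"max action error: {err:.2e}")
```

Methods such as TD3+BC in the list above keep this supervised term as a regularizer while also maximizing a learned Q-value, which is what distinguishes offline RL from pure imitation.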
&lt;h3 style="margin-top: 24px; color:rgb(94, 120, 225); font-size: 20px;"&gt;Features:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;D2C includes a large collection of offline RL and IL algorithms: model-free and model-based methods, as well as planning approaches.&lt;/li&gt;
&lt;li&gt;D2C is highly modular and extensible, so you can easily build custom algorithms and run experiments with it.&lt;/li&gt;
&lt;li&gt;D2C automates the development process for real-world control applications, simplifying problem definition and mathematical formulation, policy training, policy evaluation, and model deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style="margin-top: 24px; color:rgb(94, 120, 225); font-size: 20px;"&gt;Library Information:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;The library is available at &lt;a href="https://github.com/AIR-DI/D2C" target="_blank" rel="noopener"&gt;https://github.com/AIR-DI/D2C&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The tutorials and API documentation are hosted on &lt;a href="https://air-d2c.readthedocs.io/" target="_blank" rel="noopener"&gt;air-d2c.readthedocs.io&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 style="margin-top: 24px; color: #00bcd4; font-size: 24px; text-align: center;"&gt;Online RL Library&lt;/h3&gt;
&lt;div style="font-family: Helvetica, sans-serif; max-width: 960px; margin: 0 auto; padding: 20px; line-height: 1.6; color: #333;"&gt;
&lt;div style="
padding: 2px;
border-radius: 12px;
background: linear-gradient(135deg, #e0f2fe, #ecfdf5);
box-shadow: 0 4px 12px rgba(0,0,0,0.05);
"&gt;
&lt;div style="
background: white;
border-radius: 10px;
padding: 20px;
"&gt;
&lt;p style="
font-size: 18px;
line-height: 1.7;
color: #1e293b;
margin: 0;
"&gt;
&lt;a href="https://github.com/imoneoi/onerl"&gt;OneRL&lt;/a&gt;: Event-driven fully distributed reinforcement learning framework proposed in &lt;a href="https://arxiv.org/abs/2110.11573"&gt;"A Versatile and Efficient Reinforcement Learning Approach for Autonomous Driving"&lt;/a&gt; that can facilitate highly efficient policy learning in RL-based tasks.
&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
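The event-driven, decoupled design described above can be sketched in miniature: an actor thread streams transitions into a bounded queue while a learner thread consumes them in batches, so collection and learning overlap instead of alternating. This is a conceptual Python sketch of the idea only; the names and structure here are invented for illustration and are not OneRL's actual API.

```python
import queue
import threading

N_TRANSITIONS = 100
BATCH_SIZE = 10

# A bounded queue decouples the actor (producer) from the learner (consumer).
buf = queue.Queue(maxsize=50)

def actor():
    # Collect transitions and push them without waiting for the learner.
    for step in range(N_TRANSITIONS):
        transition = (step, step * 0.1)  # stand-in for (obs, action, reward)
        buf.put(transition)

consumed = []

def learner():
    # Consume fixed-size batches as soon as they become available.
    while len(consumed) < N_TRANSITIONS:
        batch = [buf.get() for _ in range(BATCH_SIZE)]
        consumed.extend(batch)  # stand-in for a gradient update

actor_t = threading.Thread(target=actor)
learner_t = threading.Thread(target=learner)
actor_t.start()
learner_t.start()
actor_t.join()
learner_t.join()
print(f"learner processed {len(consumed)} transitions")
```

In a fully distributed framework the queue becomes shared memory or a network channel and there are many actors and learners, but the pipelining principle is the same.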
&lt;h3 style="margin-top: 24px; color:rgb(94, 120, 225); font-size: 20px;"&gt;Features:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Super-fast RL training (15&#8211;30 min for MuJoCo &amp;amp; Atari on a single machine)&lt;/li&gt;
&lt;li&gt;State-of-the-art performance&lt;/li&gt;
&lt;li&gt;Scheduled and pipelined sample collection&lt;/li&gt;
&lt;li&gt;Completely lock-free execution&lt;/li&gt;
&lt;li&gt;Fully distributed architecture&lt;/li&gt;
&lt;li&gt;Full profiling &amp;amp; overhead-identification tools&lt;/li&gt;
&lt;li&gt;Online visualization &amp;amp; rendering&lt;/li&gt;
&lt;li&gt;Multi-GPU parallel training&lt;/li&gt;
&lt;li&gt;Export of trained policies to ONNX for faster inference &amp;amp; deployment&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Third MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/third_workshop/</link><pubDate>Sat, 24 Jun 2023 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/third_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>News</title><link>https://mode-demo.github.io/news/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/news/</guid><description/></item><item><title>Second MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/second_workshop/</link><pubDate>Mon, 12 Sep 2022 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/second_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>Deep Regression of Muon Energy with a K-Nearest Neighbor Algorithm</title><link>https://mode-demo.github.io/publication/dorigo-2022-deepregressionmuonenergy/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/dorigo-2022-deepregressionmuonenergy/</guid><description/></item><item><title>Toward the End-to-End Optimization of Particle Physics Instruments with Differentiable Programming: a White Paper</title><link>https://mode-demo.github.io/publication/dorigo-2022-endtoendoptimizationparticlephysics/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/dorigo-2022-endtoendoptimizationparticlephysics/</guid><description/></item><item><title>First MODE Workshop on Differentiable Programming for Experiment Design</title><link>https://mode-demo.github.io/events/first_workshop/</link><pubDate>Mon, 06 Sep 2021 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/events/first_workshop/</guid><description>&lt;!--
&lt;div class="alert alert-note"&gt;
&lt;div&gt;
Click on the &lt;strong&gt;Slides&lt;/strong&gt; button above to view the built-in slides feature.
&lt;/div&gt;
&lt;/div&gt;
Slides can be added in a few ways:
- **Create** slides using Hugo Blox Builder's [_Slides_](https://docs.hugoblox.com/reference/content-types/) feature and link using `slides` parameter in the front matter of the talk file
- **Upload** an existing slide deck to `static/` and link using `url_slides` parameter in the front matter of the talk file
- **Embed** your slides (e.g. Google Slides) or presentation video on this page using [shortcodes](https://docs.hugoblox.com/reference/markdown/).
Further event details, including [page elements](https://docs.hugoblox.com/reference/markdown/) such as image galleries, can be added to the body of this page. --&gt;</description></item><item><title>Object condensation: one-stage grid-free multi-object reconstruction in physics detectors, graph, and image data</title><link>https://mode-demo.github.io/publication/kieseler-2020/</link><pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/kieseler-2020/</guid><description/></item><item><title>Adversarial Variational Optimization of Non-Differentiable Simulators</title><link>https://mode-demo.github.io/publication/louppe-2020-adversarialvariationaloptimizationnondifferentiable/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/louppe-2020-adversarialvariationaloptimizationnondifferentiable/</guid><description/></item><item><title>Geometry optimization of a muon-electron scattering detector</title><link>https://mode-demo.github.io/publication/dorigo-2020100022/</link><pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/dorigo-2020100022/</guid><description/></item><item><title>INFERNO: Inference-Aware Neural Optimisation</title><link>https://mode-demo.github.io/publication/decastro-2019170/</link><pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/decastro-2019170/</guid><description/></item><item><title>D2C</title><link>https://mode-demo.github.io/research/example/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/research/example/</guid><description>&lt;p&gt;Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. 
Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.&lt;/p&gt;
&lt;p&gt;Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.&lt;/p&gt;
&lt;p&gt;Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.&lt;/p&gt;
&lt;p&gt;Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.&lt;/p&gt;
&lt;p&gt;Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.&lt;/p&gt;</description></item><item><title>Example Research</title><link>https://mode-demo.github.io/research/example1/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/research/example1/</guid><description>&lt;p&gt;Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.&lt;/p&gt;
&lt;p&gt;Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.&lt;/p&gt;
&lt;p&gt;Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.&lt;/p&gt;
&lt;p&gt;Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.&lt;/p&gt;
&lt;p&gt;Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.&lt;/p&gt;</description></item><item><title>Example Research</title><link>https://mode-demo.github.io/research/example2/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/research/example2/</guid><description>&lt;p&gt;Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.&lt;/p&gt;
&lt;p&gt;Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.&lt;/p&gt;
&lt;p&gt;Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.&lt;/p&gt;
&lt;p&gt;Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.&lt;/p&gt;
&lt;p&gt;Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.&lt;/p&gt;</description></item><item><title>Example Research</title><link>https://mode-demo.github.io/research/example3/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/research/example3/</guid><description>&lt;p&gt;Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis posuere tellus ac convallis placerat. Proin tincidunt magna sed ex sollicitudin condimentum. Sed ac faucibus dolor, scelerisque sollicitudin nisi. Cras purus urna, suscipit quis sapien eu, pulvinar tempor diam. Quisque risus orci, mollis id ante sit amet, gravida egestas nisl. Sed ac tempus magna. Proin in dui enim. Donec condimentum, sem id dapibus fringilla, tellus enim condimentum arcu, nec volutpat est felis vel metus. Vestibulum sit amet erat at nulla eleifend gravida.&lt;/p&gt;
&lt;p&gt;Nullam vel molestie justo. Curabitur vitae efficitur leo. In hac habitasse platea dictumst. Sed pulvinar mauris dui, eget varius purus congue ac. Nulla euismod, lorem vel elementum dapibus, nunc justo porta mi, sed tempus est est vel tellus. Nam et enim eleifend, laoreet sem sit amet, elementum sem. Morbi ut leo congue, maximus velit ut, finibus arcu. In et libero cursus, rutrum risus non, molestie leo. Nullam congue quam et volutpat malesuada. Sed risus tortor, pulvinar et dictum nec, sodales non mi. Phasellus lacinia commodo laoreet. Nam mollis, erat in feugiat consectetur, purus eros egestas tellus, in auctor urna odio at nibh. Mauris imperdiet nisi ac magna convallis, at rhoncus ligula cursus.&lt;/p&gt;
&lt;p&gt;Cras aliquam rhoncus ipsum, in hendrerit nunc mattis vitae. Duis vitae efficitur metus, ac tempus leo. Cras nec fringilla lacus. Quisque sit amet risus at ipsum pharetra commodo. Sed aliquam mauris at consequat eleifend. Praesent porta, augue sed viverra bibendum, neque ante euismod ante, in vehicula justo lorem ac eros. Suspendisse augue libero, venenatis eget tincidunt ut, malesuada at lorem. Donec vitae bibendum arcu. Aenean maximus nulla non pretium iaculis. Quisque imperdiet, nulla in pulvinar aliquet, velit quam ultrices quam, sit amet fringilla leo sem vel nunc. Mauris in lacinia lacus.&lt;/p&gt;
&lt;p&gt;Suspendisse a tincidunt lacus. Curabitur at urna sagittis, dictum ante sit amet, euismod magna. Sed rutrum massa id tortor commodo, vitae elementum turpis tempus. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean purus turpis, venenatis a ullamcorper nec, tincidunt et massa. Integer posuere quam rutrum arcu vehicula imperdiet. Mauris ullamcorper quam vitae purus congue, quis euismod magna eleifend. Vestibulum semper vel augue eget tincidunt. Fusce eget justo sodales, dapibus odio eu, ultrices lorem. Duis condimentum lorem id eros commodo, in facilisis mauris scelerisque. Morbi sed auctor leo. Nullam volutpat a lacus quis pharetra. Nulla congue rutrum magna a ornare.&lt;/p&gt;
&lt;p&gt;Aliquam in turpis accumsan, malesuada nibh ut, hendrerit justo. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Quisque sed erat nec justo posuere suscipit. Donec ut efficitur arcu, in malesuada neque. Nunc dignissim nisl massa, id vulputate nunc pretium nec. Quisque eget urna in risus suscipit ultricies. Pellentesque odio odio, tincidunt in eleifend sed, posuere a diam. Nam gravida nisl convallis semper elementum. Morbi vitae felis faucibus, vulputate orci placerat, aliquet nisi. Aliquam erat volutpat. Maecenas sagittis pulvinar purus, sed porta quam laoreet at.&lt;/p&gt;</description></item><item><title>Approximating Likelihood Ratios with Calibrated Discriminative Classifiers</title><link>https://mode-demo.github.io/publication/cranmer-2016-approximatinglikelihoodratioscalibrated/</link><pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/publication/cranmer-2016-approximatinglikelihoodratioscalibrated/</guid><description/></item><item><title>dasdas</title><link>https://mode-demo.github.io/members/dasdas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/members/dasdas/</guid><description/></item><item><title>Submit Profile</title><link>https://mode-demo.github.io/submit/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://mode-demo.github.io/submit/</guid><description>&lt;div class="submit-profile-page"&gt;
&lt;p&gt;Use this form to submit your profile. A draft pull request will be opened automatically for maintainers to review.&lt;/p&gt;
&lt;form id="submit-profile-form"&gt;
&lt;label for="name"&gt;Name *&lt;/label&gt;
&lt;input id="name" name="name" type="text" required /&gt;
&lt;label for="role"&gt;Role *&lt;/label&gt;
&lt;input id="role" name="role" type="text" required /&gt;
&lt;label for="affiliation"&gt;Affiliation *&lt;/label&gt;
&lt;input id="affiliation" name="affiliation" type="text" required /&gt;
&lt;label for="bio"&gt;Bio *&lt;/label&gt;
&lt;textarea id="bio" name="bio" rows="4" required&gt;&lt;/textarea&gt;
&lt;label for="email"&gt;Email *&lt;/label&gt;
&lt;input id="email" name="email" type="email" required /&gt;
&lt;label for="social_links"&gt;Social links (comma-separated URLs)&lt;/label&gt;
&lt;input id="social_links" name="social_links" type="text" placeholder="https://github.com/yourname, https://www.linkedin.com/in/yourname" /&gt;
&lt;label for="avatar"&gt;Avatar image (jpg/png) *&lt;/label&gt;
&lt;input id="avatar" name="avatar" type="file" accept="image/*" required /&gt;
&lt;label for="access_key"&gt;Access key *&lt;/label&gt;
&lt;input id="access_key" name="access_key" type="password" required /&gt;
&lt;button type="submit" id="submit-btn"&gt;Submit&lt;/button&gt;
&lt;/form&gt;
&lt;p id="submit-status" aria-live="polite"&gt;&lt;/p&gt;
&lt;/div&gt;
&lt;script&gt;
const form = document.getElementById('submit-profile-form');
const statusEl = document.getElementById('submit-status');
const submitBtn = document.getElementById('submit-btn');
// Read a file and resolve it as a base64 data URL for the JSON payload.
function fileToDataUrl(file) {
return new Promise((resolve, reject) =&gt; {
const reader = new FileReader();
reader.onload = () =&gt; resolve(reader.result);
reader.onerror = () =&gt; reject(new Error('Unable to read avatar file.'));
reader.readAsDataURL(file);
});
}
// Collect the form fields, POST them as JSON, and report the resulting PR URL.
form.addEventListener('submit', async (event) =&gt; {
event.preventDefault();
statusEl.textContent = 'Submitting profile...';
submitBtn.disabled = true;
try {
const avatarFile = document.getElementById('avatar').files[0];
if (!avatarFile) {
throw new Error('Avatar image is required.');
}
const avatarDataUrl = await fileToDataUrl(avatarFile);
const payload = {
name: document.getElementById('name').value.trim(),
role: document.getElementById('role').value.trim(),
affiliation: document.getElementById('affiliation').value.trim(),
bio: document.getElementById('bio').value.trim(),
email: document.getElementById('email').value.trim(),
socialLinks: document
.getElementById('social_links')
.value
.split(',')
.map((link) =&gt; link.trim())
.filter(Boolean),
accessKey: document.getElementById('access_key').value,
avatarDataUrl,
avatarFilename: avatarFile.name,
avatarMimeType: avatarFile.type
};
const response = await fetch('https://mode-collaboration-github-io.vercel.app/api/submit-profile', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
});
const result = await response.json();
if (!response.ok) {
throw new Error(result.error || 'Submission failed.');
}
statusEl.innerHTML = `Submitted successfully. Draft PR: &lt;a href="${result.pullRequestUrl}" target="_blank" rel="noopener"&gt;${result.pullRequestUrl}&lt;/a&gt;`;
form.reset();
} catch (error) {
statusEl.textContent = error.message || 'Submission failed.';
} finally {
submitBtn.disabled = false;
}
});
&lt;/script&gt;
&lt;style&gt;
.submit-profile-page {
max-width: 740px;
}
#submit-profile-form {
display: grid;
gap: 0.75rem;
}
#submit-profile-form input,
#submit-profile-form textarea,
#submit-profile-form button {
font: inherit;
padding: 0.6rem;
}
#submit-profile-form button {
width: fit-content;
cursor: pointer;
}
#submit-status {
margin-top: 1rem;
font-weight: 600;
}
&lt;/style&gt;</description></item></channel></rss>