Automation for Photopolymer Optimization: Developments Toward Next-Generation Energy Materials

By Aldair E. Gongora and Johanna J. Schwartz, Lawrence Livermore National Laboratory

Photopolymer synthesis and formulation optimization are areas of enormous design complexity. Factors such as chemical composition, additives and fillers, absorption properties and cure kinetics dictate the performance and lifetime of a photopolymer in its end-use application. Many photopolymer resins also are highly viscous, which makes handling cumbersome and keeps R&D slow and labor intensive compared to low-viscosity systems, where liquid-handling automation and high-throughput screening platforms can be used. Fortunately, recent advances in laboratory automation have begun to deliver significant gains in fields such as biology, chemistry and materials science. 1,2 In this perspective, the authors provide context for their ongoing efforts to advance photopolymer screening and optimization through the development of novel high-viscosity laboratory automation systems.

Tackling a battery-material problem

To access viscosity regimes beyond the reach of liquid-handling automation platforms, the authors took inspiration from additive manufacturing systems like direct-ink-write (DIW) printing, which can handle viscosities in the millions of cP. Recent advances in active-mixing DIW enable compositional control that allows photopolymer resins to be mixed, deposited and cured in a single operation. The authors' Studying-Polymers-On a-Chip (SPOC) platform takes this further, adding laboratory automation and on-machine characterization to optimize formulations against targeted performance metrics in real time (see Figure 1). 3

Figure 1. a) Schematic of the automated Studying-Polymers-On a-Chip (SPOC) high-throughput screening platform, combining (1) an active-mixing direct-ink-write (DIW) printer for multimaterial screening, (2) an in-situ characterization substrate array and (3) a living database for machine learning-driven automation and feedback. b) The three major steps in the automated process are first to deposit the polymer electrolyte photoresin onto the PCB, second to photopolymerize and cure the solid polymer electrolytes using 405 nm light and lastly to take direct impedance measurements. All steps are pre-programmed using Python scripts. A solid sample after curing and an example raw EIS spectrum also are shown.

To demonstrate the SPOC system, the authors first targeted a critical, societally relevant problem in battery materials. The overall safety, stability and lifetime of Li-ion and Na-ion batteries depend on the electrolyte. Liquid electrolytes have good ionic conductivity but carry risks of leakage and thermal runaway. Polymer electrolytes based on photo-crosslinkable formulations are enticing for their processability, mechanical stability and modularity; however, their ionic conductivities are much lower than those of liquid electrolyte systems. This made ionic conductivity, the first battery-material bottleneck, a perfect parameter space (see Figure 2) in which to apply SPOC automation to accelerate polymer electrolyte advancement.
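
Converting a measured EIS spectrum into an ionic conductivity is a simple geometric calculation. Below is a minimal Python sketch, assuming the bulk resistance is read from the high-frequency intercept of the Nyquist plot and the electrode geometry is captured by a calibrated cell constant; the numbers are illustrative, not SPOC data.

```python
def ionic_conductivity(r_bulk_ohm: float, cell_constant_per_cm: float) -> float:
    """Ionic conductivity (S/cm) from an EIS measurement.

    r_bulk_ohm: bulk resistance, typically taken as the high-frequency
        intercept of the Nyquist plot with the real axis.
    cell_constant_per_cm: geometric cell constant (thickness / area for a
        parallel-plate cell, in cm^-1); for a PCB electrode array this
        would be calibrated per substrate. The value below is hypothetical.
    """
    return cell_constant_per_cm / r_bulk_ohm

# Illustrative numbers: a 1 kOhm bulk resistance and a 0.5 cm^-1 cell constant
sigma = ionic_conductivity(1_000.0, 0.5)
print(f"sigma = {sigma:.1e} S/cm")  # -> 5.0e-04 S/cm
```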

Figure 2. Target photoresins for the SPOC study consisted of a 9:1 weight ratio of PEGMEA:PEGDA, an alkali salt (LiTFSI between 0-30 wt% or NaTFSI between 0-20 wt%) and a passive inorganic filler (various SiO2 particles, Al2O3 or TiO2 at 0-15 wt% and 0-10 wt% for the LiTFSI and NaTFSI systems, respectively). All resins were prepared with 0.1 wt% Irgacure 819 as the photoinitiator for crosslinking under 405 nm irradiation. More information can be found at DOI 10.21203/rs.3.rs-6164887/v1.
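
For readers who want to reproduce the batching arithmetic implied by this composition space, a minimal Python sketch follows. It assumes the salt, filler and photoinitiator loadings are specified as weight percent of the total batch and that the monomer balance is split 9:1 PEGMEA:PEGDA; the helper is illustrative, not the authors' code.

```python
def batch_masses(total_g: float, salt_wt_pct: float, filler_wt_pct: float,
                 pi_wt_pct: float = 0.1) -> dict:
    """Component masses (g) for one resin batch.

    Assumes salt, filler and photoinitiator loadings are wt% of the total
    batch and that the remaining monomer is split 9:1 by weight between
    PEGMEA and PEGDA, per the Figure 2 formulation.
    """
    monomer_g = total_g * (100.0 - salt_wt_pct - filler_wt_pct - pi_wt_pct) / 100.0
    return {
        "PEGMEA": 0.9 * monomer_g,
        "PEGDA": 0.1 * monomer_g,
        "salt (e.g., LiTFSI)": total_g * salt_wt_pct / 100.0,
        "filler (e.g., SiO2)": total_g * filler_wt_pct / 100.0,
        "Irgacure 819": total_g * pi_wt_pct / 100.0,
    }

# Example: a 10 g batch at 20 wt% LiTFSI and 5 wt% SiO2
for component, grams in batch_masses(10.0, 20.0, 5.0).items():
    print(f"{component}: {grams:.3f} g")
```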

In the SPOC system, the authors designed an automated experimental workflow to optimize Li-ion and Na-ion polymer electrolyte battery materials. As shown in Figure 1, this involved 1) creating the resin design space, in which two resin input compositions were mixed at varying ratios, deposited and cured for ionic conductivity measurements; 2) taking ionic conductivity measurements using a custom PCB electrode substrate array; and 3) feeding those data directly into a living, evolving database of comparable automated screening data. Curation of this database also was automated using back-end data engineering techniques, underscoring the importance of effective, integrated experiment-computer communication for data extraction, generation and analysis. Together, the automated experimental arm of SPOC and the automated data pipeline represent an exemplary workflow for future photopolymer material advancement.
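
As a concrete illustration, the three pre-programmed steps in Figure 1b reduce to a short control loop. The sketch below is hypothetical Python: the spoc_drivers module and its deposit/cure/measure_eis/insert calls are stand-ins for the actual SPOC instrument drivers, and the cure parameters are placeholders.

```python
from datetime import datetime, timezone

# Hypothetical drivers standing in for the actual SPOC hardware interfaces.
from spoc_drivers import printer, uv_source, potentiostat, database  # illustrative

def run_sample(well: int, mix_ratio: float) -> None:
    """Deposit, cure and measure one formulation on the PCB substrate array.

    mix_ratio is the fraction of resin B blended into resin A by the
    active-mixing DIW head (0.0 to 1.0); cure parameters are placeholders.
    """
    printer.deposit(well=well, mix_ratio=mix_ratio)           # step 1: deposit
    uv_source.cure(well=well, wavelength_nm=405, seconds=60)  # step 2: photocure
    spectrum = potentiostat.measure_eis(well=well)            # step 3: impedance
    database.insert({                                         # feed the database
        "well": well,
        "mix_ratio": mix_ratio,
        "eis": spectrum,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Sweep the two-resin composition axis across a ten-well array.
for well, ratio in enumerate(i / 9 for i in range(10)):
    run_sample(well, ratio)
```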

Lessons learned for advancement

Looking forward, the authors hope to use this growing database and experimental toolset to pave the way toward full lab autonomy for these photopolymer-based energy material systems, which would integrate machine learning feedback and predictive decision making.

The above SPOC system 4 targeted materials formulation and characterization simultaneously, but it did not access true synthesis, as its number of inlets is limited compared to traditional liquid-handling systems. Future automated labs fundamentally will need integration between synthesis efforts, such as the A-Lab at Lawrence Berkeley National Laboratory 5 and PolyBot at Argonne National Laboratory, 6,7 and scale-up of down-selected syntheses for formulations screening (in methods like SPOC). Furthermore, longer-term and more complex characterizations, beyond measurement of a single initial performance property, also will be essential, co-optimizing performance metrics with other factors such as cost, compatibility, scalability and material aging lifetimes. This multi-length-scale effort is complex in and of itself, even before a target application is considered, and requires development in multiple areas, including instrument automation from various vendors, integration between automated instruments, quality control and assurance of data streams from instruments, and appropriate problem formulation.

Taking a step back, the example polymer electrolyte system offers three lessons, with key examples, that could be useful for broad photopolymer and energy material advancement:

Figure 3. a) Illustration of a high-throughput experimentation (HTE) platform featuring a robotic arm coordinating sample handling and enabling seamless integration across synthesis, formulation and characterization modules for applications in photopolymer optimization. b) Illustration of a digital laboratory infrastructure, highlighting data analytics, centralized storage and data pipelines that enable adaptive and real-time data capture. c) Illustration of Human-in-the-Loop, showcasing a multidisciplinary team contributing domain expertise across disciplines to co-develop and operate automated experimentation platforms. Illustrations created in collaboration with OpenAI’s ChatGPT-4o model.

Lesson 1: High-throughput experimentation (HTE) offers potential acceleration by reducing labor and time while increasing repeatability and data throughput (see Figure 3a). However, the sheer vastness of the parameter space poses numerous challenges without tools to aid experimental selection and design. To overcome these challenges, HTE platforms can incorporate robotics as foundational infrastructure for expanded automation. Robotic arms can be vital in enhancing connectivity, enabling precise, repeatable sample handling and bridging physically disconnected components and instruments in experimental workflows. A popular example is the collaborative robot, which provides various means of integration and has safety features for working in shared environments with humans. By using robotic arms to integrate automated instruments and equipment, greater spatial coordination can be achieved in laboratory environments to connect efforts in synthesis, formulation and characterization, as sketched below.
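
The sketch below is a hypothetical coordination layer: the hte_lab module, its arm and stations objects, and their methods are invented interfaces meant only to illustrate how a collaborative arm can chain otherwise disconnected modules, not a real vendor API.

```python
# Hypothetical coordination layer: the hte_lab module, the arm and the
# station objects are illustrative interfaces, not a real vendor API.
from hte_lab import arm, stations

def shuttle(sample_id: str) -> None:
    """Carry one sample through synthesis -> formulation -> characterization."""
    for station in (stations.synthesis, stations.formulation,
                    stations.characterization):
        arm.move_to(station)        # cobot repositions between modules
        arm.place(sample_id)        # hand the sample to the station
        station.process(sample_id)  # blocks until the station finishes
        arm.pick(sample_id)         # retrieve for the next module

shuttle("SPOC-0001")
```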

Lesson 2: Integrating a comprehensive digital infrastructure – encompassing instrument control, automated data collection, centralized storage and real-time analytics – with HTE fundamentally can reshape discovery in fields such as photopolymer research (see Figure 3b). By establishing seamless software pipelines across the entire experimental lifecycle, researchers can move beyond brute-force screening toward intelligent, adaptive exploration of complex design spaces, such as the photopolymer design space. This level of integration can streamline experimental workflows and enable machine learning and optimization algorithms to iteratively refine hypotheses and target high-performing regions that maximize information gain (see the sketch below). Recent developments in large language models (LLMs) and autonomous AI agents can extend this digital transformation even further when interfaced with HTE systems. These tools can assist in data parsing, interpretation and even experimental planning, effectively augmenting human decision-making. Furthermore, LLMs also can provide a means of integrating heterogeneous sources of prior knowledge – including published literature, historical datasets and subject matter expertise – into the design and execution of new experiments. This creates an opportunity for transfer learning across domains, where information including composition, processing conditions and performance metrics readily can be generalized, shared and applied to new systems with limited or no existing preliminary data. In this way, the convergence of HTE, integrated software and AI-driven analytics offers a path toward cumulative, knowledge-guided discovery – where each experiment informs the next, and where the scientific process becomes both faster and more informed.
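
As one concrete (and deliberately simplified) example of such adaptive exploration, the sketch below runs a Bayesian-optimization-style loop using scikit-learn's Gaussian process regressor and an upper-confidence-bound acquisition rule; run_experiment is a synthetic stand-in for a real HTE measurement, not the authors' objective function.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_experiment(x: float) -> float:
    """Synthetic stand-in for an automated measurement (e.g., a normalized
    performance metric versus a normalized composition variable)."""
    return -(x - 0.6) ** 2 + 0.02 * np.random.randn()

candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)  # composition grid
X = [[0.0], [1.0]]                                      # seed compositions
y = [run_experiment(0.0), run_experiment(1.0)]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):                 # closed loop: fit, acquire, measure
    gp.fit(np.array(X), np.array(y))
    mean, std = gp.predict(candidates, return_std=True)
    x_next = float(candidates[np.argmax(mean + 2.0 * std)][0])  # UCB pick
    X.append([x_next])
    y.append(run_experiment(x_next))

print("best composition so far:", X[int(np.argmax(y))][0])
```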

Lesson 3: Humans remain essential to the development and implementation of modern laboratory automation systems (see Figure 3c). While automation and machine learning enable unprecedented throughput, precision and adaptability in experimental workflows, these capabilities rely on careful design, coordination and oversight by multidisciplinary teams with significant subject matter expertise. A representative example of this collaborative approach is the composition of the SPOC team, which comprised data scientists, polymer chemists, hardware engineers, materials scientists, machine learning researchers, electrochemists and electrical engineers. This composition brought unique insights and technical capabilities from each discipline: polymer chemists defined the experimental workflow to explore the design space; data scientists and machine learning experts formulated the data infrastructure to best frame intelligent exploration of the large parameter space; and hardware and electrical engineering experts designed, built and refined the automation hardware to enable robust experimentation. Crucially, these perspectives and contributions allowed for the co-development and co-design of the SPOC system and avoided the pitfalls of siloed approaches. As a result, SPOC now is a tool for broad polymer, composite, electrochemical and energy materials research, continually expanding its capabilities through close interdisciplinary collaboration. This example underscores the importance of diverse, human-driven teams in shaping the future of automated science. Laboratory automation, when grounded in deep domain knowledge and collaborative engineering, offers a transformative approach to accelerating discovery – but its success depends on the thoughtful orchestration of both human and machine intelligence.

Challenges remain

While the promise of laboratory automation lies in its ability to accelerate science and discovery, exemplified through these three lessons learned, realizing this vision in practice faces significant challenges. First, developing and realizing HTE systems can be non-trivial and costly. Laboratory hardware often is expensive, proprietary and difficult to adapt or scale. Moreover, developing custom solutions for laboratory experiments requires significant time and resources for prototyping, testing and iteration. Developing platforms such as SPOC requires substantial engineering expertise, system integration and a deep understanding of the scientific experimental context. The reality is that many automated systems still are bespoke, limiting reproducibility and broader adoption across laboratories.

Second, data and digital engineering pose their own set of complexities. To capitalize on automated experimentation efforts, data and digital engineering rely on automated methods for the generation, curation and utilization of high-quality, structured data. However, this becomes difficult when instrumentation and software are closed or incompatible. Questions remain around what data should be collected, how they should be standardized and what digital infrastructure is best suited for long-term system evolution (an illustrative schema is sketched below). The lack of open, interoperable platforms continues to be a barrier to scalable, reusable solutions in scientific automation.
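
One lightweight way to pin down "what data should be collected and how it should be standardized" is to declare an explicit record schema at the start of a project. The Python dataclass below is an illustrative example of such a schema, not a published SPOC format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ElectrolyteRecord:
    """Illustrative schema for one automated screening measurement."""
    sample_id: str
    composition_wt_pct: dict   # e.g., {"LiTFSI": 20.0, "SiO2": 5.0}
    cure_wavelength_nm: int
    eis_frequency_hz: list
    eis_impedance_ohm: list    # complex values stored as [re, im] pairs
    instrument_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ElectrolyteRecord(
    sample_id="SPOC-0001",
    composition_wt_pct={"LiTFSI": 20.0, "SiO2": 5.0},
    cure_wavelength_nm=405,
    eis_frequency_hz=[1e5, 1e4, 1e3],
    eis_impedance_ohm=[[950.0, -120.0], [980.0, -310.0], [1100.0, -840.0]],
    instrument_id="pcb-array-07",
)
print(json.dumps(asdict(record), indent=2))  # serializable, schema-checked record
```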

Third, and perhaps most importantly, is the human dimension. Building and sustaining teams that span chemistry, materials science, engineering, computer science, data analytics and more is a non-trivial endeavor. Coordinated effort and time are required for researchers from different domains to develop a shared language, common goals and the mutual understanding that multidisciplinary solutions demand for sustained success. The development of truly interdisciplinary teams requires not only technical alignment but also cultural and communicative fluency. Addressing these challenges will be essential for the next generation of automated research systems, and for the next generation of photopolymer materials researchers. Future progress will depend as much on lowering the barriers to entry and fostering open, collaborative ecosystems as on advancing algorithms or hardware. As these systems continue to be built, thoughtful integration of people, platforms and processes must remain at the forefront. As with all collaborative science, communication is key!

Ultimately, wavelength and light-intensity modulation are among the most accessible stimulus controls for integration with automated screening systems. As such, the authors see exciting possibilities for future autonomous laboratories dedicated to photopolymer advancement and look forward to the potential to accelerate progress toward direct market impact in a myriad of applications.

This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded through the Laboratory Directed Research and Development (LDRD) program (22-LW-006 and 25-ERD-032). LLNL-JRNL-2004959.

References

  1. Tom, G. et al. Self-Driving Laboratories for Chemistry and Materials Science. Chem. Rev. 124, 9633–9732 (2024).
  2. Abolhasani, M. & Kumacheva, E. The rise of self-driving labs in chemical and materials sciences. Nature Synthesis 2, 483–492 (2023).
  3. Schwartz, J. J., Wood, M., Ye, J., Jaycox, A. W. & Zhong, X. (Lawrence Livermore National Laboratory). High Throughput Materials Screening. U.S. Patent Application 17/932,723 (2022).
  4. Jimenez, J. C. et al. Towards High-Throughput Materials Advancement: Thinking About Database Management in Our Studying-Polymers-on-a-Chip (SPOC) Platform. in TMS 2025 154th Annual Meeting & Exhibition Supplemental Proceedings 1253–1266 (Springer Nature Switzerland, Cham, 2025).
  5. Szymanski, N. J. et al. An autonomous laboratory for the accelerated synthesis of novel materials. Nature 624, 86–91 (2023).
  6. Vescovi, R. et al. Towards a modular architecture for science factories. Digital Discovery 2, 1980–1998 (2023).
  7. Vriza, A., Chan, H. & Xu, J. Self-Driving Laboratory for Polymer Electronics. Chem. Mater. 35, 3046–3056 (2023).
  8. Schwartz, J. J. (corresponding author) et al. Studying-Polymers-On a-Chip (SPOC): High-throughput screening of polymers for battery applications. Lawrence Livermore National Laboratory.