Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models

Saaduddin Mahmud · Mason Nakamura · Kyle H. Wray · Shlomo Zilberstein

Manning College of Information and Computer Sciences · University of Massachusetts Amherst

Download main paper (PDF) · Download supplementary (PDF) · Code (GitHub)

Abstract

Prompt optimization methods have demonstrated significant effectiveness in aligning black-box large language models (LLMs). In parallel, inference scaling strategies such as Best-of-N Sampling and Majority Voting have likewise been shown to improve alignment and performance by trading additional computation for better outputs. However, existing prompt optimization approaches are inference-strategy agnostic: they optimize prompts without regard to how responses will be sampled at inference time. This constitutes a significant methodological gap, as our empirical and theoretical analysis reveals a strong interdependence between these two paradigms. Moreover, we find that user preferences regarding trade-offs among multiple objectives, as well as inference budgets, substantially influence the choice of prompt and inference configuration.
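The two inference scaling strategies named above can be sketched in a few lines. In this illustrative sketch, `generate` and `reward` are hypothetical stand-ins for a black-box LLM call and a preference/alignment score; both return canned values so the example is self-contained.

```python
from collections import Counter

def generate(prompt: str, i: int) -> str:
    """Hypothetical stand-in for one black-box LLM sample (deterministic here for illustration)."""
    canned = ["B", "A", "B", "C", "B"]
    return canned[i % len(canned)]

def reward(response: str) -> float:
    """Hypothetical stand-in for a scalar alignment/preference score."""
    return {"A": 0.9, "B": 0.5, "C": 0.1}[response]

def best_of_n(prompt: str, n: int) -> str:
    """Best-of-N Sampling: draw n responses, keep the highest-scoring one."""
    samples = [generate(prompt, i) for i in range(n)]
    return max(samples, key=reward)

def majority_vote(prompt: str, n: int) -> str:
    """Majority Voting: draw n responses, return the most frequent one."""
    samples = [generate(prompt, i) for i in range(n)]
    return Counter(samples).most_common(1)[0][0]

print(best_of_n("summarize:", 5))      # -> "A" (highest reward among the 5 samples)
print(majority_vote("summarize:", 5))  # -> "B" (most frequent among the 5 samples)
```

Note that the two strategies can disagree, as here: the interdependence between the prompt, the scale `n`, and the chosen strategy is exactly what motivates optimizing them jointly.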

To address this gap, we introduce a unified framework named IAPO (Inference-Aware Prompt Optimization) that jointly optimizes the prompt and inference scale, while being aware of the inference budget and different task objectives. We then develop a fixed-budget training algorithm for IAPO, called PSST (Prompt Scaling via Sequential Trimming), and establish finite-budget guarantees on the error probability. Finally, we evaluate the effectiveness of PSST on six tasks, including multi-objective text generation and reasoning, and demonstrate the critical role of incorporating inference-awareness in aligning black-box LLMs using prompt optimization.
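The abstract describes PSST only at a high level (a fixed-budget algorithm that sequentially trims candidate prompt/inference-scale configurations). A generic fixed-budget successive-halving sketch in that spirit is shown below; the `arms` (prompt, scale) pairs, the `evaluate` callback, and the trimming schedule are all assumptions for illustration, not the paper's exact algorithm.

```python
import math

def sequential_trimming(arms, evaluate, budget):
    """Illustrative fixed-budget trimming loop: split the evaluation budget
    across rounds, score each surviving (prompt, inference-scale) arm with its
    share of pulls, and discard the worse half each round. The exact schedule
    and estimators used by PSST are specified in the paper, not here."""
    survivors = list(arms)
    rounds = max(1, math.ceil(math.log2(len(survivors))))
    per_round = budget // rounds
    scores = {}
    for _ in range(rounds):
        pulls = max(1, per_round // len(survivors))  # budget share per surviving arm
        for arm in survivors:
            scores[arm] = sum(evaluate(arm) for _ in range(pulls)) / pulls
        survivors.sort(key=lambda a: scores[a], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]  # trim the worse half
    return survivors[0]

# Hypothetical arms: (prompt id, number of inference samples), scored by a fixed mean.
means = {("p1", 1): 0.2, ("p2", 4): 0.5, ("p3", 8): 0.9, ("p4", 2): 0.4}
print(sequential_trimming(list(means), lambda arm: means[arm], budget=40))  # -> ("p3", 8)
```

Halving-style schedules of this kind are a standard way to obtain finite-budget error-probability bounds, which matches the guarantee stated for PSST.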

Overview Figure

Overview figure: inference-agnostic vs. inference-aware prompt optimization.
Inference-agnostic vs. inference-aware prompt optimization. The left side illustrates standard prompt optimization, which treats the inference strategy as fixed: a single best prompt is selected during training and then used at inference with a predetermined number of samples, which can lead to misaligned outputs and high inference cost for some queries. The right side shows our inference-aware framework IAPO with the PSST algorithm, which conditions on user context such as budget and preferences, jointly selects the prompt and inference scale, and produces responses that better satisfy the objectives and the budget. The project page is available at https://iapo-aaai25.github.io/, and the code is available at https://github.com/IAPO-AAAI25/IAPO-PSST.