
Commit 18e0b36

release: 1.92.0 (#2424)
* chore(tests): skip some failing tests on the latest python versions
* chore(internal): add tests for breaking change detection
* move over parse and stream methods out of beta
* update docs
* update tests
* remove old beta files
* fix relative import
* fix(ci): release-doctor — report correct token name
* feat(api): webhook and deep research support
* release: 1.92.0

---------

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
Co-authored-by: David Meadows <dmeadows@stainless.com>
1 parent 0673da6 commit 18e0b36

66 files changed: +2380 additions, -1051 deletions


.release-please-manifest.json

Lines changed: 1 addition & 1 deletion
@@ -1,3 +1,3 @@
 {
-  ".": "1.91.0"
+  ".": "1.92.0"
 }

.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ef4ecb19eb61e24c49d77fef769ee243e5279bc0bdbaee8d0f8dba4da8722559.yml
-openapi_spec_hash: 1b8a9767c9f04e6865b06c41948cdc24
-config_hash: fd2af1d5eff0995bb7dc02ac9a34851d
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-cca460eaf5cc13e9d6e5293eb97aac53d66dc1385c691f74b768c97d165b6e8b.yml
+openapi_spec_hash: 9ec43d443b3dd58ca5aa87eb0a7eb49f
+config_hash: e74d6791681e3af1b548748ff47a22c2

CHANGELOG.md

Lines changed: 20 additions & 0 deletions
@@ -1,5 +1,25 @@
 # Changelog
 
+## 1.92.0 (2025-06-26)
+
+Full Changelog: [v1.91.0...v1.92.0](https://github.com/openai/openai-python/compare/v1.91.0...v1.92.0)
+
+### Features
+
+* **api:** webhook and deep research support ([d3bb116](https://github.com/openai/openai-python/commit/d3bb116f34f470502f902b88131deec43a953b12))
+* **client:** move stream and parse out of beta ([0e358ed](https://github.com/openai/openai-python/commit/0e358ed66b317038705fb38958a449d284f3cb88))
+
+
+### Bug Fixes
+
+* **ci:** release-doctor — report correct token name ([ff8c556](https://github.com/openai/openai-python/commit/ff8c5561e44e8a0902732b5934c97299d2c98d4e))
+
+
+### Chores
+
+* **internal:** add tests for breaking change detection ([710fe8f](https://github.com/openai/openai-python/commit/710fe8fd5f9e33730338341680152d3f2556dfa0))
+* **tests:** skip some failing tests on the latest python versions ([93ccc38](https://github.com/openai/openai-python/commit/93ccc38a8ef1575d77d33d031666d07d10e4af72))
+
 ## 1.91.0 (2025-06-23)
 
 Full Changelog: [v1.90.0...v1.91.0](https://github.com/openai/openai-python/compare/v1.90.0...v1.91.0)

README.md

Lines changed: 78 additions & 0 deletions
@@ -406,6 +406,84 @@ client.files.create(
 
 The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
 
+## Webhook Verification
+
+Verifying webhook signatures is _optional but encouraged_.
+
+### Parsing webhook payloads
+
+For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap()`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `.unwrap()` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.
+
+```python
+from openai import OpenAI
+from flask import Flask, request
+
+app = Flask(__name__)
+client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default
+
+
+@app.route("/webhook", methods=["POST"])
+def webhook():
+    request_body = request.get_data(as_text=True)
+
+    try:
+        event = client.webhooks.unwrap(request_body, request.headers)
+
+        if event.type == "response.completed":
+            print("Response completed:", event.data)
+        elif event.type == "response.failed":
+            print("Response failed:", event.data)
+        else:
+            print("Unhandled event type:", event.type)
+
+        return "ok"
+    except Exception as e:
+        print("Invalid signature:", e)
+        return "Invalid signature", 400
+
+
+if __name__ == "__main__":
+    app.run(port=8000)
+```
+
+### Verifying webhook payloads directly
+
+In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature()` to _only verify_ the signature of a webhook request. Like `.unwrap()`, this method will raise an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.
+
+```python
+import json
+from openai import OpenAI
+from flask import Flask, request
+
+app = Flask(__name__)
+client = OpenAI()  # OPENAI_WEBHOOK_SECRET environment variable is used by default
+
+
+@app.route("/webhook", methods=["POST"])
+def webhook():
+    request_body = request.get_data(as_text=True)
+
+    try:
+        client.webhooks.verify_signature(request_body, request.headers)
+
+        # Parse the body after verification
+        event = json.loads(request_body)
+        print("Verified event:", event)
+
+        return "ok"
+    except Exception as e:
+        print("Invalid signature:", e)
+        return "Invalid signature", 400
+
+
+if __name__ == "__main__":
+    app.run(port=8000)
+```
+
 ## Handling errors
 
 When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `openai.APIConnectionError` is raised.

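Note that the Flask examples in this README addition catch a bare `Exception`. Since this commit also exports `InvalidWebhookSignatureError` from the package root (see the `src/openai/__init__.py` diff below), a narrower handler along these lines should also work; this is a sketch, not part of the diff:

```python
# Sketch (not part of this commit's diff): catching the specific
# InvalidWebhookSignatureError that this release exports from the package root.
from openai import OpenAI, InvalidWebhookSignatureError

client = OpenAI()  # OPENAI_WEBHOOK_SECRET is read from the environment by default


def handle_webhook(raw_body: str, headers: dict) -> tuple[str, int]:
    try:
        event = client.webhooks.unwrap(raw_body, headers)
    except InvalidWebhookSignatureError as e:
        print("Invalid signature:", e)
        return "Invalid signature", 400
    print("Received event:", event.type)
    return "ok", 200
```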
api.md

Lines changed: 30 additions & 0 deletions
@@ -395,6 +395,35 @@ Methods:
 - <code>client.vector_stores.file_batches.<a href="./src/openai/resources/vector_stores/file_batches.py">poll</a>(\*args) -> VectorStoreFileBatch</code>
 - <code>client.vector_stores.file_batches.<a href="./src/openai/resources/vector_stores/file_batches.py">upload_and_poll</a>(\*args) -> VectorStoreFileBatch</code>
 
+# Webhooks
+
+Types:
+
+```python
+from openai.types.webhooks import (
+    BatchCancelledWebhookEvent,
+    BatchCompletedWebhookEvent,
+    BatchExpiredWebhookEvent,
+    BatchFailedWebhookEvent,
+    EvalRunCanceledWebhookEvent,
+    EvalRunFailedWebhookEvent,
+    EvalRunSucceededWebhookEvent,
+    FineTuningJobCancelledWebhookEvent,
+    FineTuningJobFailedWebhookEvent,
+    FineTuningJobSucceededWebhookEvent,
+    ResponseCancelledWebhookEvent,
+    ResponseCompletedWebhookEvent,
+    ResponseFailedWebhookEvent,
+    ResponseIncompleteWebhookEvent,
+    UnwrapWebhookEvent,
+)
+```
+
+Methods:
+
+- <code>client.webhooks.<a href="./src/openai/resources/webhooks.py">unwrap</a>(payload, headers, \*, secret) -> UnwrapWebhookEvent</code>
+- <code>client.webhooks.<a href="./src/openai/resources/webhooks.py">verify_signature</a>(payload, headers, \*, secret, tolerance) -> None</code>
+
 # Beta
 
 ## Realtime
@@ -774,6 +803,7 @@ from openai.types.responses import (
     ResponseWebSearchCallSearchingEvent,
     Tool,
     ToolChoiceFunction,
+    ToolChoiceMcp,
     ToolChoiceOptions,
     ToolChoiceTypes,
     WebSearchTool,

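Per the method signatures listed in this api.md addition, `secret` (and `tolerance` on `verify_signature`) are keyword-only, so the webhook secret can also be passed explicitly instead of being read from `OPENAI_WEBHOOK_SECRET`. A minimal sketch; the environment variable name used here is illustrative:

```python
# Sketch based on the signatures above: pass the secret explicitly rather than
# relying on the OPENAI_WEBHOOK_SECRET environment variable. MY_WEBHOOK_SECRET
# is an illustrative variable name, not from the diff.
import os

from openai import OpenAI

client = OpenAI()


def parse_event(raw_body: str, headers: dict):
    return client.webhooks.unwrap(raw_body, headers, secret=os.environ["MY_WEBHOOK_SECRET"])
```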
bin/check-release-environment

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ if [ -z "${STAINLESS_API_KEY}" ]; then
 fi
 
 if [ -z "${PYPI_TOKEN}" ]; then
-  errors+=("The OPENAI_PYPI_TOKEN secret has not been set. Please set it in either this repository's secrets or your organization secrets.")
+  errors+=("The PYPI_TOKEN secret has not been set. Please set it in either this repository's secrets or your organization secrets.")
 fi
 
 lenErrors=${#errors[@]}

examples/parsing.py

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ class MathResponse(BaseModel):
 
 client = OpenAI()
 
-completion = client.beta.chat.completions.parse(
+completion = client.chat.completions.parse(
     model="gpt-4o-2024-08-06",
     messages=[
         {"role": "system", "content": "You are a helpful math tutor."},

examples/parsing_stream.py

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ class MathResponse(BaseModel):
 
 client = OpenAI()
 
-with client.beta.chat.completions.stream(
+with client.chat.completions.stream(
     model="gpt-4o-2024-08-06",
     messages=[
         {"role": "system", "content": "You are a helpful math tutor."},

examples/parsing_tools.py

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ class Query(BaseModel):
 
 client = OpenAI()
 
-completion = client.beta.chat.completions.parse(
+completion = client.chat.completions.parse(
     model="gpt-4o-2024-08-06",
     messages=[
         {

examples/parsing_tools_stream.py

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ class GetWeather(BaseModel):
 client = OpenAI()
 
 
-with client.beta.chat.completions.stream(
+with client.chat.completions.stream(
     model="gpt-4o-2024-08-06",
     messages=[
         {

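The example diffs above only show the call sites changing from `client.beta.chat.completions` to `client.chat.completions`; consuming the stream is unchanged. A minimal sketch of the event loop (the model and prompt are illustrative), following the granular event API documented in the helpers.md diff below:

```python
# Minimal sketch of consuming the non-beta stream() helper; model and prompt
# are illustrative, and event handling follows the helpers.md event API.
from openai import OpenAI

client = OpenAI()

with client.chat.completions.stream(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Say hello."}],
) as stream:
    for event in stream:
        if event.type == "content.delta":
            # Incremental text as it arrives
            print(event.delta, end="", flush=True)

    # Accumulated ParsedChatCompletion once the stream is done
    completion = stream.get_final_completion()
    print("\nfinish_reason:", completion.choices[0].finish_reason)
```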
helpers.md

Lines changed: 9 additions & 8 deletions
@@ -2,7 +2,7 @@
 
 The OpenAI API supports extracting JSON from the model with the `response_format` request param, for more details on the API, see [this guide](https://platform.openai.com/docs/guides/structured-outputs).
 
-The SDK provides a `client.beta.chat.completions.parse()` method which is a wrapper over the `client.chat.completions.create()` that
+The SDK provides a `client.chat.completions.parse()` method which is a wrapper over the `client.chat.completions.create()` that
 provides richer integrations with Python specific types & returns a `ParsedChatCompletion` object, which is a subclass of the standard `ChatCompletion` class.
 
 ## Auto-parsing response content with Pydantic models
@@ -24,7 +24,7 @@ class MathResponse(BaseModel):
     final_answer: str
 
 client = OpenAI()
-completion = client.beta.chat.completions.parse(
+completion = client.chat.completions.parse(
     model="gpt-4o-2024-08-06",
     messages=[
         {"role": "system", "content": "You are a helpful math tutor."},
@@ -44,6 +44,7 @@ else:
 ## Auto-parsing function tool calls
 
 The `.parse()` method will also automatically parse `function` tool calls if:
+
 - You use the `openai.pydantic_function_tool()` helper method
 - You mark your tool schema with `"strict": True`
 
@@ -96,7 +97,7 @@ class Query(BaseModel):
     order_by: OrderBy
 
 client = openai.OpenAI()
-completion = client.beta.chat.completions.parse(
+completion = client.chat.completions.parse(
     model="gpt-4o-2024-08-06",
     messages=[
         {
@@ -121,7 +122,7 @@ print(tool_call.function.parsed_arguments.table_name)
 
 ### Differences from `.create()`
 
-The `beta.chat.completions.parse()` method imposes some additional restrictions on it's usage that `chat.completions.create()` does not.
+The `chat.completions.parse()` method imposes some additional restrictions on it's usage that `chat.completions.create()` does not.
 
 - If the completion completes with `finish_reason` set to `length` or `content_filter`, the `LengthFinishReasonError` / `ContentFilterFinishReasonError` errors will be raised.
 - Only strict function tools can be passed, e.g. `{'type': 'function', 'function': {..., 'strict': True}}`
@@ -132,7 +133,7 @@ OpenAI supports streaming responses when interacting with the [Chat Completion](
 
 ## Chat Completions API
 
-The SDK provides a `.beta.chat.completions.stream()` method that wraps the `.chat.completions.create(stream=True)` stream providing a more granular event API & automatic accumulation of each delta.
+The SDK provides a `.chat.completions.stream()` method that wraps the `.chat.completions.create(stream=True)` stream providing a more granular event API & automatic accumulation of each delta.
 
 It also supports all aforementioned [parsing helpers](#structured-outputs-parsing-helpers).
 
@@ -143,7 +144,7 @@ from openai import AsyncOpenAI
 
 client = AsyncOpenAI()
 
-async with client.beta.chat.completions.stream(
+async with client.chat.completions.stream(
     model='gpt-4o-2024-08-06',
     messages=[...],
 ) as stream:
@@ -263,7 +264,7 @@ A handful of helper methods are provided on the stream class for additional conv
 Returns the accumulated `ParsedChatCompletion` object
 
 ```py
-async with client.beta.chat.completions.stream(...) as stream:
+async with client.chat.completions.stream(...) as stream:
     ...
 
 completion = await stream.get_final_completion()
@@ -275,7 +276,7 @@ print(completion.choices[0].message)
 If you want to wait for the stream to complete, you can use the `.until_done()` method.
 
 ```py
-async with client.beta.chat.completions.stream(...) as stream:
+async with client.chat.completions.stream(...) as stream:
     await stream.until_done()
 # stream is now finished
 ```

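The helpers.md hunks above only show the call sites that changed. For context, a complete, self-contained sketch of the now non-beta `client.chat.completions.parse()` helper; the Pydantic model and prompt are illustrative, not taken from the diff:

```python
# Self-contained sketch of the non-beta parse() helper; the model fields and
# the prompt are illustrative.
from pydantic import BaseModel

from openai import OpenAI


class MathResponse(BaseModel):
    final_answer: str


client = OpenAI()

completion = client.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "You are a helpful math tutor."},
        {"role": "user", "content": "What is 9 + 10?"},
    ],
    response_format=MathResponse,
)

message = completion.choices[0].message
if message.parsed:
    # `parsed` is the validated MathResponse instance
    print(message.parsed.final_answer)
else:
    print(message.refusal)
```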
pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.91.0"
+version = "1.92.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"

src/openai/__init__.py

Lines changed: 17 additions & 0 deletions
@@ -30,6 +30,7 @@
     LengthFinishReasonError,
     UnprocessableEntityError,
     APIResponseValidationError,
+    InvalidWebhookSignatureError,
     ContentFilterFinishReasonError,
 )
 from ._base_client import DefaultHttpxClient, DefaultAioHttpClient, DefaultAsyncHttpxClient
@@ -62,6 +63,7 @@
     "InternalServerError",
     "LengthFinishReasonError",
     "ContentFilterFinishReasonError",
+    "InvalidWebhookSignatureError",
     "Timeout",
     "RequestOptions",
     "Client",
@@ -121,6 +123,8 @@
 
 project: str | None = None
 
+webhook_secret: str | None = None
+
 base_url: str | _httpx.URL | None = None
 
 timeout: float | Timeout | None = DEFAULT_TIMEOUT
@@ -183,6 +187,17 @@ def project(self, value: str | None) -> None:  # type: ignore
 
         project = value
 
+    @property  # type: ignore
+    @override
+    def webhook_secret(self) -> str | None:
+        return webhook_secret
+
+    @webhook_secret.setter  # type: ignore
+    def webhook_secret(self, value: str | None) -> None:  # type: ignore
+        global webhook_secret
+
+        webhook_secret = value
+
     @property
     @override
     def base_url(self) -> _httpx.URL:
@@ -335,6 +350,7 @@ def _load_client() -> OpenAI:  # type: ignore[reportUnusedFunction]
             api_key=api_key,
             organization=organization,
             project=project,
+            webhook_secret=webhook_secret,
             base_url=base_url,
             timeout=timeout,
             max_retries=max_retries,
@@ -363,6 +379,7 @@ def _reset_client() -> None:  # type: ignore[reportUnusedFunction]
     models as models,
     batches as batches,
     uploads as uploads,
+    webhooks as webhooks,
     responses as responses,
     containers as containers,
     embeddings as embeddings,

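The hunks above add a module-level `webhook_secret` global, a proxying property on the module client, and forward the value into `_load_client()`, mirroring how `api_key`, `organization`, and `project` already work. Assuming that parallel holds, module-level configuration should look roughly like this sketch:

```python
# Sketch (assumption based on the hunks above): webhook_secret is set as a
# module-level global and forwarded to the lazily created default client, the
# same way openai.api_key / openai.organization / openai.project work.
import os

import openai

openai.webhook_secret = os.environ.get("OPENAI_WEBHOOK_SECRET")

# The `webhooks as webhooks` re-export in the last hunk suggests the resource
# is also reachable at module level, e.g.:
# event = openai.webhooks.unwrap(raw_body, headers)
```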