Commit 2f861b7: readme improvements
1 parent: 22079cf

File tree: 1 file changed (+16, -14 lines)


README.md

Lines changed: 16 additions & 14 deletions
@@ -114,7 +114,7 @@ list_famous_composers(20)
 
 (Shameless ad: if classical music is your thing, I built a [GPT-automated website](https://github.com/Zulko/composer-timelines) on top of this function and a few others powered by ChatGPT)
 
-Functions defined with the decorator can also have multiple arguments and keyword arguments:
+A `gpt_function`-decorated function can also have multiple arguments and keyword arguments:
 
 ```python
 from gpt_function_decorator import gpt_function
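The `{word}`-style placeholders in the docstrings above hint at how arguments reach the prompt. As a minimal sketch of that mechanism, not the library's actual implementation, a decorator can bind a call's arguments to the signature and interpolate them into the docstring (`format_prompt` and this `synonym` signature are illustrative):

```python
import inspect

def format_prompt(func, *args, **kwargs):
    """Bind a call's positional and keyword arguments to the function's
    signature, then interpolate them into the docstring template."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()  # so defaulted keyword arguments fill in too
    return func.__doc__.format(**bound.arguments)

def synonym(word, tone="formal"):
    """Return a {tone} synonym of "{word}"."""

print(format_prompt(synonym, "man", tone="academic"))
# Return a academic synonym of "man".
```

Keyword arguments with defaults (like `tone` here) make such prompts tunable without changing the function body.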
@@ -131,6 +131,7 @@ synonym("man", tone="academic") # returns "individual"
 ```
 
 Putting everything together in this example:
+
 ```python
 @gpt_function
 def find_words_in_text(text, categories, limit=3) -> list[str]:
@@ -146,43 +147,44 @@ find_words_in_text(text, categories=["animal", "food"])
 
 ### Advanced output formatting
 
-You can provide any simple output format directly (`-> int`, `-> float`, etc.). Lists should always declare the element type (for instance `list[str]`).
+You can provide any simple output format directly in the function signature with `-> int`, `-> float`, etc. Lists should always declare the element type (for instance `list[str]`).
 
-The OpenAI API doesn't seem to like types like `tuple` too much, and will complain if you have a type like `Dict` but don't specify the keys. If you really want to specify a `Dict` output with minimal boilerplate you can use the `TypedDict`:
+The OpenAI API doesn't seem to like types like `tuple` too much, and will refuse a `Dict` type as it doesn't know what key names to use. To specify a `Dict` output with minimal boilerplate, you can use a `TypedDict`:
 
 ```python
-from typing_extensions import TypedDict # or just typing, for Python>=3.12
+from typing_extensions import TypedDict  # or just "typing", for Python>=3.12
 
 @gpt_function
-def first_us_presidents(n) -> list[TypedDict("i", dict(birth_year=int, name=str))]:
+def first_us_presidents(n) -> list[TypedDict("i", dict(birth=int, name=str))]:
     """Return the {n} first US presidents with their birth year"""
 
 first_us_presidents(3)
-# [{'year': 1732, 'name': 'George Washington'},
-# {'year': 1735, 'name': 'John Adams'},
-# {'year': 1751, 'name': 'Thomas Jefferson'}]
+# [{'birth': 1732, 'name': 'George Washington'},
+# {'birth': 1735, 'name': 'John Adams'},
+# {'birth': 1751, 'name': 'Thomas Jefferson'}]
 ```
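For readers unfamiliar with the functional `TypedDict` syntax used in that diff: it declares the key names and value types of a plain `dict`, which is exactly the schema information the structured output needs. A small illustration (the `President` name is mine):

```python
from typing import TypedDict  # or typing_extensions on older Pythons

# Functional syntax: the first argument names the type, the second maps
# keys to value types -- equivalent to the inline annotation above.
President = TypedDict("President", dict(birth=int, name=str))

# Instances are plain dicts; the type only documents the expected shape.
washington = President(birth=1732, name="George Washington")
print(washington)
# {'birth': 1732, 'name': 'George Washington'}
```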
 
-But really the cleanest (and OpenAI-officially-supported) way is to provide a Pydantic model:
+But really the cleanest way (also officially supported by OpenAI) is to provide a Pydantic model as the return type:
 
 ```python
 from pydantic import BaseModel
 
 class USPresident(BaseModel):
+    birth: int
     name: str
-    birth_year: int
+
 
 @gpt_function
 def first_us_presidents(n) -> list[USPresident]:
     """Return the {n} first US presidents with their birth year"""
 
 first_us_presidents(3)
-# [President(name='George Washington', birth_year=1732),
-# President(name='John Adams', birth_year=1735),
-# President(name='Thomas Jefferson', birth_year=1743)]
+# [USPresident(birth=1732, name='George Washington'),
+# USPresident(birth=1735, name='John Adams'),
+# USPresident(birth=1743, name='Thomas Jefferson')]
 ```
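One advantage of the Pydantic route is that the answer comes back as validated model instances rather than raw dicts, so a mistyped field access fails loudly and values are coerced to the declared types. A sketch of that behavior, assuming Pydantic is installed:

```python
from pydantic import BaseModel

class USPresident(BaseModel):
    birth: int
    name: str

# Pydantic coerces compatible values ("1732" -> 1732) and raises a
# ValidationError on incompatible ones, instead of silently passing junk.
p = USPresident(birth="1732", name="George Washington")
print(p.birth)
# 1732
```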
 
-With Pydantic models you can have output schemas as nested and complex as you like (see [the docs](https://cookbook.openai.com/examples/structured_outputs_intro)), although it seems that the more difficult you'll make it for the GPT to understand how to fill the schema, the longer it's take.
+With Pydantic models you can have output schemas as nested and complex as you like (see [the docs](https://cookbook.openai.com/examples/structured_outputs_intro)), although it seems that the harder the schema is for the GPT to understand and fill, the longer the request will take (not sure about costs).
 
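To illustrate that nesting (the field names below are my own invention, not from the library or the README), a model can hold lists of other models and the generated schema follows along:

```python
from pydantic import BaseModel

class Term(BaseModel):
    start: int
    end: int

class USPresident(BaseModel):
    name: str
    birth: int
    terms: list[Term]  # nested model: each term carries its own schema

washington = USPresident(
    name="George Washington",
    birth=1732,
    terms=[Term(start=1789, end=1797)],
)
print(washington.terms[0].start)
# 1789
```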
 ### Using `gpt_function` on class methods
 