
Commit 416216c

remove makefile (distracting) prefer uv
1 parent b5bf8b3 commit 416216c

File tree

4 files changed (+55, -74 lines)


.github/workflows/ci.yml

Lines changed: 46 additions & 7 deletions
@@ -5,6 +5,21 @@ on:
     branches: [ main ]
   pull_request:
     branches: [ main ]
+  workflow_dispatch:
+    inputs:
+      run_integration:
+        description: "Run integration tests"
+        required: false
+        default: false
+        type: boolean
+      target_env:
+        description: "Target environment for integration tests"
+        required: false
+        default: "env1"
+        type: choice
+        options:
+          - env1
+          - env2
 
 permissions:
   contents: read
@@ -37,6 +52,12 @@ jobs:
   integration:
     runs-on: ubuntu-latest
     needs: unit
+    strategy:
+      matrix:
+        target_env: [env1]
+      fail-fast: false
+    env:
+      TARGET_ENV: ${{ github.event.inputs.target_env || matrix.target_env }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
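
A note on the `TARGET_ENV` expression in this hunk: GitHub Actions expressions treat the empty string as falsy, so a manual `workflow_dispatch` input wins and push/PR runs fall back to the matrix value. A minimal Python sketch of that selection logic (the function name is ours, for illustration only):

```python
def select_target_env(dispatch_input: str, matrix_value: str) -> str:
    # Mirrors ${{ github.event.inputs.target_env || matrix.target_env }}:
    # an empty string (no manual input) is falsy, so the matrix value wins.
    return dispatch_input or matrix_value
```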
@@ -52,15 +73,33 @@ jobs:
       - name: Sync dependencies
         run: uv sync
 
-      - name: Run integration tests
+      - name: Select environment secrets
+        id: envsecrets
+        run: |
+          echo "TARGET_ENV=$TARGET_ENV" >> $GITHUB_OUTPUT
+      - name: Run integration tests (env1)
+        if: ${{ steps.envsecrets.outputs.TARGET_ENV == 'env1' }}
+        env:
+          OPENAI_API_KEY: ${{ secrets.ENV1_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
+          OPENAI_BASE_URL: ${{ secrets.ENV1_OPENAI_BASE_URL || secrets.OPENAI_BASE_URL }}
+          MODEL_ID: ${{ secrets.ENV1_MODEL_ID || secrets.MODEL_ID }}
+          HF_TOKEN: ${{ secrets.ENV1_HF_TOKEN || secrets.HF_TOKEN }}
+        run: |
+          if [ -z "$OPENAI_BASE_URL" ]; then
+            echo "OPENAI_BASE_URL not set; skipping integration tests for env1";
+            exit 0;
+          fi
+          uv run pytest -q -m integration
+      - name: Run integration tests (env2)
+        if: ${{ steps.envsecrets.outputs.TARGET_ENV == 'env2' }}
         env:
-          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
-          OPENAI_BASE_URL: ${{ secrets.OPENAI_BASE_URL }}
-          MODEL_ID: ${{ secrets.MODEL_ID }}
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          OPENAI_API_KEY: ${{ secrets.ENV2_OPENAI_API_KEY || secrets.OPENAI_API_KEY }}
+          OPENAI_BASE_URL: ${{ secrets.ENV2_OPENAI_BASE_URL || secrets.OPENAI_BASE_URL }}
+          MODEL_ID: ${{ secrets.ENV2_MODEL_ID || secrets.MODEL_ID }}
+          HF_TOKEN: ${{ secrets.ENV2_HF_TOKEN || secrets.HF_TOKEN }}
         run: |
-          if [ -z "${{ secrets.OPENAI_BASE_URL }}" ]; then
-            echo "OPENAI_BASE_URL not set; skipping integration tests";
+          if [ -z "$OPENAI_BASE_URL" ]; then
+            echo "OPENAI_BASE_URL not set; skipping integration tests for env2";
             exit 0;
           fi
           uv run pytest -q -m integration
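
The per-environment secret lookup follows the same `||` fallback pattern: the environment-specific secret wins, the shared one is the fallback, and the step exits early when the base URL resolves to empty. A hedged Python sketch of that resolution order (variable names mirror the workflow; the helper functions are ours):

```python
import os

def resolve_secret(env_prefix: str, name: str) -> str:
    # Prefer the environment-specific value (e.g. ENV1_OPENAI_BASE_URL),
    # then fall back to the shared one, matching
    # ${{ secrets.ENV1_* || secrets.* }} in the workflow.
    return os.environ.get(f"{env_prefix}_{name}") or os.environ.get(name, "")

def should_run_integration(env_prefix: str) -> bool:
    # The workflow step exits 0 (skips) when OPENAI_BASE_URL is empty.
    return bool(resolve_secret(env_prefix, "OPENAI_BASE_URL"))
```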

FRICTION_LOG.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ Basically, for class-based user defined functions, naive initialization usage pa
 
 For some reason, when we use an openai client inside of a function based batch UDF, we can't add the concurrency parameter. We get this runtime error referring to pickle serialization. I ran into this while I was initially developing the batch udf and it took a second to actually reproduce, but it looks like others have run into it. I also started a [thread on slack](https://dist-data.slack.com/archives/C052CA6Q9N1/p1756400464828409) due to my confusion on whether or not the daft.func supported a concurrency argument.
 
-### [Issue 5090](https://github.com/Eventual-Inc/Daft/issues/5088) Scaling headaches
+### Scaling headaches (Demonstrated in workload notebook)
 
 For the average user who is looking to leverage daft to perform ai inference using a client (whether it would be openai or otherwise), most users will try either a row_wise UDF or a synchronous Batch UDF. These implementations work at small scale but run into issues once users attempt to run them at 2000 + rows. Regardless of how they arrive at the conclusion, eventually they will attempt to run their inference calls asynchronously which will produce non-blocking errors at the 200-1500 row limit range.
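
The scaling issue described in this hunk is commonly worked around by bounding in-flight requests rather than firing one call per row. A minimal asyncio sketch, not Daft-specific and with names of our own choosing:

```python
import asyncio

async def bounded_map(rows, call, max_concurrency=64):
    # Cap concurrent requests so a 2000+ row run doesn't open one
    # connection per row, which is what tends to trigger the
    # non-blocking errors described above.
    sem = asyncio.Semaphore(max_concurrency)

    async def one(row):
        async with sem:
            return await call(row)

    # gather preserves input order in its result list.
    return await asyncio.gather(*(one(r) for r in rows))
```

In practice `call` would be the async OpenAI client invocation for a single row.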

README.md

Lines changed: 8 additions & 20 deletions
@@ -40,36 +40,24 @@ Core Deliverable:
 - **Python**: 3.12+
 - **uv**: Fast Python package/venv manager. Install:
   ```bash
-  curl -LsSf https://astral.sh/uv/install.sh | sh
+  pip install uv
   ```
 
 ### Install and Setup
+Clone this repository and then run
 ```bash
 cd daft-structured-outputs
-make setup
+uv venv && uv sync
 ```
-This creates a local `.venv` and syncs dependencies from `pyproject.toml`. Prefer running commands with `uv run` without activating the venv.
+- This creates a local `.venv` and syncs dependencies from `pyproject.toml`.
+- Prefer running commands with `uv run` without activating the venv.
 
 ### Environment Variables
-These are read by tests and examples. The `makefile` also exports them.
+These are read by tests and examples. A `.env.examples` has been provided as a template.
 - `OPENAI_API_KEY`: Any non-empty value when using a local vLLM server (e.g., `none`).
-- `OPENAI_BASE_URL`: Defaults to `http://0.0.0.0:8000/v1`.
+- `OPENAI_BASE_URL`: Defaults to None. vLLM examples default to localhost:8000
 - `HF_TOKEN`: Hugging Face token for model pulls. If not set, use `make hf-auth`.
-
-Example:
-```bash
-export OPENAI_API_KEY=none
-export OPENAI_BASE_URL=http://0.0.0.0:8000/v1
-export HF_TOKEN=hf_...
-```
-
-### Make Targets
-```bash
-make setup    # Create venv and uv sync
-make sync     # Re-sync dependencies
-make activate # Echo activation instructions (prefer `uv run`)
-make clean    # Remove .venv
-```
+- `MODEL_ID`: for integration tests and CI
 
 ---
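
The README change gives `OPENAI_BASE_URL` no global default, with vLLM examples falling back to a local server. That behavior can be sketched as a small helper (hypothetical, not part of the repo):

```python
import os

def openai_base_url(vllm_example: bool = False):
    # Per the updated README: OPENAI_BASE_URL has no default;
    # vLLM examples fall back to a local server when it is unset.
    url = os.environ.get("OPENAI_BASE_URL")
    if url is None and vllm_example:
        return "http://localhost:8000/v1"
    return url
```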

makefile

Lines changed: 0 additions & 46 deletions
This file was deleted.
