
Commit 7d41629

add pipeline

1 parent: a6c8f6f

File tree

3 files changed: +35 −15 lines


public/final_pipeline.jpg

2.77 MB (binary image, no text diff shown)

src/App.js

Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ const App = ({ carouselData, suggestionsData, activeChats, setActiveChats }) =>
           marginBottom: '20px',
         }}
       >
-        GUMBO
+        GUMBOs are proactive assistants enabled by GUMs
       </h2>

       {/* App Section */}

src/components/DemoPage.jsx

Lines changed: 34 additions & 14 deletions
@@ -43,10 +43,6 @@ if __name__ == "__main__":
       asyncio.run(main())
   `;

-  // const abstractText = "Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on the their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, GUMBO worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.";
-
-  // const abstractPreview = abstractText.split('. ').slice(0, 3).join('. ') + '.';
-
   return (

     <div style={{margin: '0 auto', paddingLeft: '5%', paddingRight: '5%', paddingTop: '20px', paddingBottom: '20px' }}>
@@ -90,12 +86,12 @@ if __name__ == "__main__":
       </div>

       <div style={{ display: 'flex', justifyContent: 'center', gap: '15px', marginBottom: '20px' }}>
-        <a href="https://arxiv.org" target="_blank" rel="noopener noreferrer" className="start-chat-button" style={{ padding: '12px 12px', fontSize: '16px' }}>
-          <FaFileAlt style={{ marginRight: '0.5rem', position: 'relative', top: '2px', fontSize: '18px' }} /> Paper
+        <a href="https://arxiv.org" target="_blank" rel="noopener noreferrer" className="start-chat-button" style={{ padding: '12px 12px', fontSize: '16px', display: 'flex', alignItems: 'center' }}>
+          <FaFileAlt style={{ marginRight: '0.5rem', fontSize: '18px' }} /> Paper
         </a>

-        <a href="https://github.com/generalusermodels/gum" target="_blank" rel="noopener noreferrer" className="start-chat-button" style={{ padding: '12px 12px', fontSize: '16px' }}>
-          <FaGithub style={{ marginRight: '0.5rem', position: 'relative', top: '2px', fontSize: '18px' }} /> GitHub
+        <a href="https://github.com/generalusermodels/gum" target="_blank" rel="noopener noreferrer" className="start-chat-button" style={{ padding: '12px 12px', fontSize: '16px', display: 'flex', alignItems: 'center' }}>
+          <FaGithub style={{ marginRight: '0.5rem', fontSize: '18px' }} /> GitHub
         </a>
       </div>

@@ -228,13 +224,16 @@ if __name__ == "__main__":
             setActiveChats={setActiveChats}
           />
         </DynamicDataProvider>
-      </div>
+        <p style={{
+          marginTop: '15px',
+        }}>
+          Above is an example instantiation of GUMBO based on the user's current GUM. Feel free to click on and explore suggestions. Dragging the slider will update the GUMBO's suggestions based on the user's changing GUM.
+        </p>
+      </div>
     </div>

-
-
     <div style={{
-      margin: '30px 0px 0px 0px',
+      margin: '14px 0px 0px 0px',
       padding: '25px 30px',
       borderLeft: '4px solid var(--chat-button-bg)',
       borderRadius: '6px',
@@ -292,12 +291,33 @@ if __name__ == "__main__":
       }}>
         How it works
       </h3>
+
+      <div style={{
+        display: 'flex',
+        justifyContent: 'center',
+        margin: '30px auto',
+        width: '70%',
+        backgroundColor: 'white',
+        padding: '20px',
+        borderRadius: '8px',
+        boxShadow: '0 2px 8px rgba(0, 0, 0, 0.2)'
+      }}>
+        <img
+          src="/final_pipeline.jpg"
+          alt="GUM Pipeline Architecture"
+          style={{
+            maxWidth: '100%',
+            height: 'auto',
+          }}
+        />
+      </div>
+
       <p style={{
         lineHeight: '1.6',
         margin: '0',
         fontSize: '15px'
       }}>
-        placeholder placeholder placeholder...
+        A Propose module translates unstructured observations into confidence-weighted propositions about the user's preferences, context, and intent. A Retrieve module indexes and searches these propositions to return the most contextually relevant subset for a given query. Finally, using results from Retrieve, a Revise module reevaluates and refines propositions as new observations arrive. Each module is parameterized by a large multimodal model (in our case, a vision and language model, or VLM).
       </p>

       <h3 style={{
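
The Propose / Retrieve / Revise loop described in the paragraph added above can be sketched in a few lines of Python. This is a toy illustration only: the class and method names are assumptions rather than the gum package's actual API, and the VLM calls are stubbed with trivial heuristics.

# Toy sketch of the Propose / Retrieve / Revise loop. Names and logic are
# illustrative assumptions, not the gum package's real API; the VLM calls
# are replaced by trivial stand-ins.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str          # natural-language claim about the user
    confidence: float  # confidence weight in [0, 1]

@dataclass
class GUM:
    propositions: list[Proposition] = field(default_factory=list)

    def propose(self, observation: str) -> list[Proposition]:
        # Real system: a VLM maps an unstructured observation (e.g., a
        # screenshot) to one or more confidence-weighted propositions.
        return [Proposition(f"The user is {observation}", 0.6)]

    def retrieve(self, query: str, k: int = 5) -> list[Proposition]:
        # Real system: an index over all propositions; here, rank by
        # simple word overlap with the query.
        words = set(query.lower().split())
        def score(p: Proposition) -> int:
            return len(words & set(p.text.lower().split()))
        return sorted(self.propositions, key=score, reverse=True)[:k]

    def revise(self, new: Proposition) -> None:
        # Real system: a VLM merges/rewrites related propositions; here,
        # reinforce an exact duplicate or insert the new proposition.
        for p in self.retrieve(new.text, k=1):
            if p.text == new.text:
                p.confidence = min(1.0, p.confidence + 0.1)
                return
        self.propositions.append(new)

    def observe(self, observation: str) -> None:
        for prop in self.propose(observation):
            self.revise(prop)

gum = GUM()
gum.observe("revising a draft paper after collaborator feedback")
print(gum.retrieve("what is the user working on?", k=1))

The structural point the sketch preserves is that Retrieve sits inside Revise: each new proposition is reconciled against its most related existing ones, which is what keeps the user model consistent as observations accumulate.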
@@ -315,7 +335,7 @@ if __name__ == "__main__":
       margin: '0',
       fontSize: '15px'
     }}>
-      placeholder placeholder placeholder...
+      For GUMs, privacy guarantees are critical from the start. Our general engineering principle is to rely primarily on open-source models: while closed-source models are currently more performant, we expect open-source models to be owned by individual users and eventually distilled to run on local devices. Accordingly, our study was deployed and run with open-source models. As the gap between closed and open-source models narrows and inference becomes cheaper, these models will become more performant and feasible on commodity hardware. Our implementation is open-source (available on <a href="https://github.com/generalusermodels/gum" target="_blank" rel="noopener noreferrer" style={{ color: '#ff9d9d' }}>GitHub</a>) and uses the OpenAI Completions API; open-source inference platforms like vLLM support the Completions API and work with systems like GUM.
     </p>


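Since the implementation targets the OpenAI Completions API, pointing it at a locally served open-source model is mostly a matter of changing the endpoint. A minimal sketch, assuming a vLLM server is running locally; the port and model name below are placeholders for illustration, not values taken from this repository:

# Minimal sketch: using the OpenAI client against vLLM's OpenAI-compatible
# server. The URL, port, and model name are assumptions for illustration.
# Example server command: vllm serve Qwen/Qwen2-VL-7B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM does not check the key by default
)

response = client.chat.completions.create(
    model="Qwen/Qwen2-VL-7B-Instruct",  # assumed local vision-language model
    messages=[{"role": "user", "content": "Summarize this observation of the user."}],
)
print(response.choices[0].message.content)
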