Commit beb53c2

gifs added
1 parent b4540cf

File tree

9 files changed: +186 additions, -216 deletions


src/components/DemoPage.jsx

Lines changed: 15 additions & 6 deletions
@@ -38,7 +38,7 @@ if __name__ == "__main__":
     asyncio.run(main())
 `;

-  const abstractText = "Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that can be used by any application. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, GUMBO worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.";
+  const abstractText = "Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, GUMBO worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.";

   const abstractPreview = abstractText.split('. ').slice(0, 3).join('. ') + '.';

@@ -103,9 +103,19 @@ if __name__ == "__main__":
         textAlign: 'center'
       }
     }>
+      <h3 style={{
+        color: 'var(--color-main-text)',
+        textAlign: 'center',
+        marginBottom: "5px"
+      }}>
+        Abstract
+      </h3>
+      <p className="abstract-paragraph">
+        {abstractExpanded ? abstractText : abstractPreview}
+      </p>
       <h4 style={{
         color: 'var(--color-main-text)',
-        margin: '0 0 15px 0',
+        margin: '20px 0 0px 0',
         textAlign: 'center',
         width: 'auto',
         display: 'inline-flex',
@@ -130,9 +140,6 @@ if __name__ == "__main__":
           <FaAngleDown style={{ marginLeft: '8px' }} />
         }
       </h4>
-      <p className="abstract-paragraph">
-        {abstractExpanded ? abstractText : abstractPreview}
-      </p>
     </div>

     <div style={{
@@ -219,7 +226,7 @@ if __name__ == "__main__":
       width: '1px',
       backgroundColor: '#888888',
       height: '40px',
-      margin: '0 8px'
+      margin: '0 12px'
     }}></div>
     <div style={{
       flex: 1,
@@ -242,6 +249,7 @@ if __name__ == "__main__":
         selectedHour={selectedHour}
         onTimeChange={handleTimeChange}
         activity={currentData.activity}
+        gif={currentData.gif}
       />
     </div>

@@ -286,6 +294,7 @@ if __name__ == "__main__":
         selectedHour={selectedHour}
         onTimeChange={handleTimeChange}
         activity={currentData.activity}
+        gif={currentData.gif}
       />
     </div>
     <div className="carousel-pane">
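Both panes now pass gif={currentData.gif} down to LeftPane. The diff doesn't show where currentData comes from; below is a minimal sketch of the shape it implies, with hypothetical activity labels. Each gif value must match a key in LeftPane's gifMap (next file):

// Hypothetical sketch, not part of this commit: the per-hour data DemoPage
// selects currentData from. Activity labels are invented; the gif values
// must match keys in LeftPane's gifMap.
const selectedHour = 3; // e.g. the value held in DemoPage's useState
const hourlyData = {
  1: { activity: 'Checking the inbox', gif: 'inboxclips.gif' },
  2: { activity: 'Editing a spreadsheet', gif: 'excelclip.gif' },
  3: { activity: 'Drafting an essay', gif: 'essayclip.gif' },
  // ...hours 4-6 follow the same pattern with the remaining clips
};
const currentData = hourlyData[selectedHour];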

src/components/LeftPane.jsx

Lines changed: 74 additions & 119 deletions
@@ -1,157 +1,112 @@
 import React, { useEffect, useState } from 'react';

+/* ──────── GIF imports ──────── */
+import inboxclipGif from './gifs/inboxclips.gif';
+import excelclipGif from './gifs/excelclip.gif';
+import essayclipGif from './gifs/essayclip.gif';
+import lunchclipGif from './gifs/lunchclip.gif';
+import figmaclipGif from './gifs/figmaclip.gif';
+import jobclipGif from './gifs/jobclip.gif';
+
+const gifMap = {
+  'inboxclips.gif': inboxclipGif,
+  'excelclip.gif' : excelclipGif,
+  'essayclip.gif' : essayclipGif,
+  'lunchclip.gif' : lunchclipGif,
+  'figmaclip.gif' : figmaclipGif,
+  'jobclip.gif'   : jobclipGif,
+};
+
+/* ──────── Slider ──────── */
 function FancySlider({ min, max, step, value, onChange }) {
   const sliderRef = React.useRef(null);
   const [isDragging, setIsDragging] = useState(false);

   useEffect(() => {
-    function handleMove(clientX) {
+    const handleMove = clientX => {
       if (!isDragging || !sliderRef.current) return;
-      const rect = sliderRef.current.getBoundingClientRect();
-      const x = clientX - rect.left;
-      const clampedX = Math.max(0, Math.min(x, rect.width));
-      const ratio = clampedX / rect.width;
-      let newValue = min + ratio * (max - min);
-      newValue = Math.round(newValue / step) * step;
+      const { left, width } = sliderRef.current.getBoundingClientRect();
+      const clampedX = Math.max(0, Math.min(clientX - left, width));
+      const ratio = clampedX / width;
+      const newValue = Math.round((min + ratio * (max - min)) / step) * step;
       onChange(newValue);
-    }
-
-    function handleMouseMove(e) {
-      handleMove(e.clientX);
-    }
-
-    function handleTouchMove(e) {
-      if (e.touches && e.touches[0]) {
-        handleMove(e.touches[0].clientX);
-      }
-    }
-
-    function handleEnd() {
-      setIsDragging(false);
-    }
-
-    // Mouse events
-    window.addEventListener('mousemove', handleMouseMove);
-    window.addEventListener('mouseup', handleEnd);
+    };

-    // Touch events
-    window.addEventListener('touchmove', handleTouchMove);
-    window.addEventListener('touchend', handleEnd);
-    window.addEventListener('touchcancel', handleEnd);
+    const mouse = e => handleMove(e.clientX);
+    const touch = e => e.touches[0] && handleMove(e.touches[0].clientX);
+    const endDrag = () => setIsDragging(false);

+    window.addEventListener('mousemove', mouse);
+    window.addEventListener('mouseup', endDrag);
+    window.addEventListener('touchmove', touch);
+    window.addEventListener('touchend', endDrag);
+    window.addEventListener('touchcancel', endDrag);
     return () => {
-      window.removeEventListener('mousemove', handleMouseMove);
-      window.removeEventListener('mouseup', handleEnd);
-      window.removeEventListener('touchmove', handleTouchMove);
-      window.removeEventListener('touchend', handleEnd);
-      window.removeEventListener('touchcancel', handleEnd);
+      window.removeEventListener('mousemove', mouse);
+      window.removeEventListener('mouseup', endDrag);
+      window.removeEventListener('touchmove', touch);
+      window.removeEventListener('touchend', endDrag);
+      window.removeEventListener('touchcancel', endDrag);
     };
   }, [isDragging, min, max, step, onChange]);

-  const startDrag = (e) => {
-    e.preventDefault();
-    setIsDragging(true);
-  };
-
-  const startTouchDrag = (e) => {
-    setIsDragging(true);
-  };
-
   const ratio = (value - min) / (max - min);

   return (
-    <div ref={sliderRef} style={{ position: 'relative', width: '100%', height: '20px' }}>
+    <div ref={sliderRef} style={{ position:'relative', width:'100%', height:20 }}>
+      <div style={{
+        position:'absolute', top:'50%', left:0, transform:'translateY(-50%)',
+        width:'100%', height:4, background:'rgba(214,206,186,.3)', borderRadius:2
+      }}/>
+      <div style={{
+        position:'absolute', top:'50%', left:0, transform:'translateY(-50%)',
+        width:`${ratio*100}%`, height:4, background:'#d6ceba', borderRadius:2
+      }}/>
       <div
+        onMouseDown={e => { e.preventDefault(); setIsDragging(true); }}
+        onTouchStart={() => setIsDragging(true)}
         style={{
-          position: 'absolute',
-          top: '50%',
-          left: 0,
-          transform: 'translateY(-50%)',
-          width: '100%',
-          height: '4px',
-          backgroundColor: 'rgba(214, 206, 186, 0.3)',
-          borderRadius: '2px',
-        }}
-      />
-      <div
-        style={{
-          position: 'absolute',
-          top: '50%',
-          left: 0,
-          transform: 'translateY(-50%)',
-          width: `${ratio * 100}%`,
-          height: '4px',
-          backgroundColor: '#d6ceba',
-          borderRadius: '2px',
-        }}
-      />
-      <div
-        onMouseDown={startDrag}
-        onTouchStart={startTouchDrag}
-        style={{
-          position: 'absolute',
-          top: '50%',
-          left: `calc(${ratio * 100}% - 10px)`,
-          transform: 'translateY(-50%)',
-          width: '20px',
-          height: '20px',
-          borderRadius: '50%',
-          backgroundColor: '#d6ceba',
-          cursor: 'pointer',
+          position:'absolute', top:'50%', left:`calc(${ratio*100}% - 10px)`,
+          transform:'translateY(-50%)', width:20, height:20, borderRadius:'50%',
+          background:'#d6ceba', cursor:'pointer'
         }}
       />
     </div>
   );
 }

-const LeftPane = ({ selectedHour, onTimeChange, activity }) => {
-  const formatTime = (hour) => `Hour ${hour}`;
-
-  const dividerStyle = {
-    width: '80%',
-    border: 'none',
-    borderTop: '1px solid #666',
-  };
+/* ──────── Left Pane ──────── */
+const LeftPane = ({ selectedHour, onTimeChange, activity, gif }) => {
+  const gifSrc = gifMap[gif] || inboxclipGif; // fallback to inbox clip

   return (
     <div
       style={{
-        display: 'flex',
-        flexDirection: 'column',
-        gap: '8px',
-        paddingTop: '24px',
-        paddingRight: '00px',
-        alignItems: 'center',
-        textAlign: 'center',
+        display:'flex',
+        flexDirection:'column',
+        gap:8,
+        paddingTop:24,
+        alignItems:'center',
+        textAlign:'center',
+        width:'100%'
       }}
     >
-      {/* ACTIVITY SHOWCASE SECTION */}
-      <div>
-        <div
-          style={{
-            border: '2px dashed #999',
-            height: '200px',
-            width: '100%',
-            maxWidth: '300px',
-            display: 'flex',
-            alignItems: 'center',
-            justifyContent: 'center',
-            margin: '0 auto',
-          }}
-        >
-          <span style={{ fontSize: '14px', color: '#999' }}>GIF Screen</span>
-        </div>
-        <p style={{ margin: '15px 0 10px 0', fontSize: '16px' }}>
-          <b>{activity}</b>
-        </p>
+      {/* Activity clip */}
+      <div style={{ width:'100%' }}>
+        <img
+          src={gifSrc}
+          alt={activity}
+          style={{ width:'80%', height:'auto', objectFit:'contain', display:'block', margin: '0 auto' }}
+        />
+        <p style={{ margin:'15px 0 10px', fontSize:16 }}><b>{activity}</b></p>
       </div>

-      {/* TIMER SELECTOR SECTION */}
-      <div style={{ width: '200px', margin: '0 auto' }}>
-        <div style={{ display: 'flex', alignItems: 'center', gap: '20px' }}>
-          <span style={{ fontSize: '14px' }}>Start</span>
+      {/* Hour selector */}
+      <div style={{ width:200, margin:'0 auto' }}>
+        <div style={{ display:'flex', alignItems:'center', gap:20 }}>
+          <span style={{ fontSize:14 }}>Start</span>
           <FancySlider min={1} max={6} step={1} value={selectedHour} onChange={onTimeChange} />
-          <span style={{ fontSize: '14px' }}>End</span>
+          <span style={{ fontSize:14 }}>End</span>
         </div>
       </div>
     </div>
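The new gifSrc lookup falls back to the inbox clip whenever the gif prop is missing or doesn't match a gifMap key, so the pane never renders a broken image. A usage sketch under that assumption (the parent component and its state are hypothetical, not from this commit):

// Hypothetical parent component showing how LeftPane is driven.
function Demo() {
  const [selectedHour, setSelectedHour] = React.useState(3);
  return (
    <LeftPane
      selectedHour={selectedHour}
      onTimeChange={setSelectedHour}
      activity="Drafting an essay"
      gif="essayclip.gif" // must match a gifMap key; unknown keys fall back to the inbox clip
    />
  );
}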

src/components/gifs/essayclip.gif (binary, 898 KB)

src/components/gifs/excelclip.gif (binary, 972 KB)

src/components/gifs/figmaclip.gif (binary, 1.08 MB)

src/components/gifs/inboxclips.gif (binary, 1.43 MB)

src/components/gifs/jobclip.gif (binary, 904 KB)

src/components/gifs/lunchclip.gif (binary, 1.34 MB)
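The six binaries above correspond one-to-one to the keys of gifMap in LeftPane.jsx; renaming a clip without updating the map would silently fall back to the inbox clip. A hypothetical development-time check, not part of this commit:

// Warn if gifMap drifts out of sync with the clips under src/components/gifs/.
const expectedClips = [
  'inboxclips.gif', 'excelclip.gif', 'essayclip.gif',
  'lunchclip.gif', 'figmaclip.gif', 'jobclip.gif',
];
expectedClips.forEach(name =>
  console.assert(name in gifMap, `gifMap is missing an entry for ${name}`)
);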
