Commit a6c8f6f

button restyle
1 parent 2209de9 commit a6c8f6f

1 file changed: +47 -3 lines changed


src/components/DemoPage.jsx

Lines changed: 47 additions & 3 deletions
@@ -5,7 +5,7 @@ import Carousel from './Carousel';
 import App from '../App';
 import dynamicData from '../data/dynamicData.json';
 import { DynamicDataProvider } from '../context/DynamicDataContext';
-import { FaFileAlt, FaGithub } from 'react-icons/fa'; // Updated icons and arrow icons
+import { FaFileAlt, FaGithub, FaArrowDown, FaArrowUp } from 'react-icons/fa'; // Added arrow icons
 import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
 import { atomDark } from 'react-syntax-highlighter/dist/esm/styles/prism';

@@ -111,9 +111,53 @@ if __name__ == "__main__":
           margin: '0',
           fontSize: '15px'
         }}>
-          Human-computer interaction has long imagined technology that understands us-from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific apps, and incapable of the flexible reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted propositions that capture that user knowledge and preferences. GUMs can infer that a user is preparing for a wedding they're attending from messages with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with context, manage OS notifications to selectively surface important information, and enable interactive agents that adapt to preferences across apps. We also instantiate proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf using their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions that users wouldn't think to request explicitly. Altogether, GUMs introduce methods that leverage multimodal models to understand unstructured context, enabling long-standing visions of HCI and entirely new interactive systems that anticipate user needs.
+          Human-computer interaction has long imagined technology that understands us-from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific apps, and incapable of the flexible reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted propositions that capture that user knowledge and preferences.
         </p>
-
+        {!abstractExpanded && (
+          <div style={{ display: 'flex', justifyContent: 'center', marginTop: '15px' }}>
+            <button
+              onClick={toggleAbstract}
+              className="start-chat-button"
+              style={{
+                padding: '8px 16px',
+                fontSize: '14px',
+                cursor: 'pointer',
+                display: 'flex',
+                alignItems: 'center',
+                gap: '8px'
+              }}
+            >
+              <FaArrowDown style={{ fontSize: '12px' }} /> Expand abstract
+            </button>
+          </div>
+        )}
+        {abstractExpanded && (
+          <>
+            <p style={{
+              lineHeight: '1.6',
+              margin: '15px 0 0 0',
+              fontSize: '15px'
+            }}>
+              GUMs can infer that a user is preparing for a wedding they're attending from messages with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with context, manage OS notifications to selectively surface important information, and enable interactive agents that adapt to preferences across apps. We also instantiate proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf using their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions that users wouldn't think to request explicitly. Altogether, GUMs introduce methods that leverage multimodal models to understand unstructured context, enabling long-standing visions of HCI and entirely new interactive systems that anticipate user needs.
+            </p>
+            <div style={{ display: 'flex', justifyContent: 'center', marginTop: '15px' }}>
+              <button
+                onClick={toggleAbstract}
+                className="start-chat-button"
+                style={{
+                  padding: '8px 16px',
+                  fontSize: '14px',
+                  cursor: 'pointer',
+                  display: 'flex',
+                  alignItems: 'center',
+                  gap: '8px'
+                }}
+              >
+                <FaArrowUp style={{ fontSize: '12px' }} /> Collapse abstract
+              </button>
+            </div>
+          </>
+        )}
+        )}
       </div>
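Note: the added JSX reads abstractExpanded and calls toggleAbstract, neither of which is touched by this commit. A minimal sketch of the state they presumably rely on elsewhere in DemoPage.jsx (the names come from the diff; the implementation below is an assumption, not part of this change):

import { useState } from 'react';

// Inside the DemoPage component body (assumed):
// abstractExpanded drives the two conditional branches above,
// and toggleAbstract flips it from either button.
const [abstractExpanded, setAbstractExpanded] = useState(false);
const toggleAbstract = () => setAbstractExpanded(prev => !prev);

Rendering the collapsed and expanded states as two mutually exclusive conditionals, as the diff does, keeps each button's icon and label self-contained, at the cost of duplicating the button markup.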
