A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page that is not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in RSS 2020 Workshop on Emergent Behavior in Human-Robot Systems, 2020
Correlated equilibria strategies can be more prosocial, as they can achieve a larger expected sum of rewards compared to pure-strategy Nash equilibria. However, it can be difficult to reach correlated equilibria in multi-agent environments due to non-stationarity. We propose Synchronized \(\epsilon\)-Greedy Exploration, which builds on the commonly-used \(\epsilon\)-greedy exploration, and therefore can be generalized to stochastic games and used in any off-policy learning algorithm.
Recommended citation: M. Beliaev*, W. Wang*, D. Lazar, E. Biyik, D. Sadigh, R. Pedarsani. Emergent Correlated Equilibrium Through Synchronized Exploration. RSS 2020 Workshop on Emergent Behavior in Human-Robot Systems, July 2020.
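Since the method in the abstract above builds on standard ε-greedy exploration, here is a minimal Python sketch of that building block, together with a purely illustrative "synchronized" variant in which all agents share a single explore-or-exploit draw per step. The function names and the shared-coin mechanism are assumptions for illustration only; the paper defines the actual synchronization scheme.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """Standard epsilon-greedy: with probability epsilon pick a uniformly
    random action, otherwise act greedily with respect to the Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def synchronized_epsilon_greedy(q_values_per_agent, epsilon, shared_rng, local_rngs):
    """Illustrative synchronized variant: one shared coin flip decides whether
    this step is an exploration step for all agents, so they explore (or
    exploit) together instead of independently.
    Hypothetical sketch -- the paper's actual mechanism may differ."""
    explore = shared_rng.random() < epsilon  # single joint explore/exploit decision
    actions = []
    for q_values, rng in zip(q_values_per_agent, local_rngs):
        if explore:
            actions.append(int(rng.integers(len(q_values))))
        else:
            actions.append(int(np.argmax(q_values)))
    return actions

# Toy usage: two agents, three actions each
shared_rng = np.random.default_rng(0)
local_rngs = [np.random.default_rng(1), np.random.default_rng(2)]
qs = [np.array([0.1, 0.5, 0.2]), np.array([0.3, 0.0, 0.9])]
print(synchronized_epsilon_greedy(qs, epsilon=0.1, shared_rng=shared_rng, local_rngs=local_rngs))
```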
Published in Robotics: Science and Systems (RSS), 2020
Collaborated with Toyota Research Institute to design policies that safely and efficiently control vehicles in near-accident scenarios, using a novel hierarchical model that layers reinforcement learning over imitation learning.
Recommended citation: Z. Cao*, E. Biyik*, W. Wang, A. Raventos, A. Gaidon, G. Rosman, D. Sadigh. Reinforcement Learning based Control of Imitative Policies for Near-Accident Driving. Robotics: Science and Systems (RSS), July 2020.
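For readers unfamiliar with the general idea of reinforcement learning layered over imitation learning, here is a minimal sketch of a two-level policy in which a high-level policy (e.g. trained with RL) chooses which pre-trained low-level policy (e.g. trained with imitation learning) drives at each step. The class name, the two stand-in modes, and the toy risk signal are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

class HierarchicalDrivingPolicy:
    """Two-level control: a high-level policy selects one of several low-level
    policies, and the selected low-level policy produces the driving action.
    Hypothetical sketch of the general structure, not the paper's exact model."""

    def __init__(self, high_level_policy, low_level_policies):
        self.high_level_policy = high_level_policy    # e.g. learned with RL
        self.low_level_policies = low_level_policies  # e.g. learned by imitation

    def act(self, observation):
        mode = self.high_level_policy(observation)          # discrete mode index
        return self.low_level_policies[mode](observation)   # low-level action

# Toy usage with stand-in callables (action = [throttle, steering])
low_level = [
    lambda obs: np.array([1.0, 0.0]),  # placeholder "efficient" mode
    lambda obs: np.array([0.2, 0.0]),  # placeholder "cautious" mode
]
high_level = lambda obs: int(obs["risk"] > 0.5)  # switch to cautious mode when risk is high
policy = HierarchicalDrivingPolicy(high_level, low_level)
print(policy.act({"risk": 0.8}))  # -> [0.2 0. ]
```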
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Course Assistant, Stanford University, 2020
Held weekly office hours, edited assignments, and graded assignments for a course with over 300 students.
Course Assistant, Stanford University, 2020
Held weekly office hours, graded assignments, and graded exams for a proof-based discrete math course.
Course Assistant, Stanford University, 2020
Held weekly office hours, wrote quizzes, and supervised projects for Stanford’s introductory AI course.