Sébastien Bubeck | Akateeminen Kirjakauppa

Your search returned a total of 3 products.

Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
Sébastien Bubeck; Nicolò Cesa-Bianchi
now publishers Inc (2012)
Paperback
92,20 €
Convex Optimization - Algorithms and Complexity
Sébastien Bubeck
now publishers Inc (2015)
Paperback
96,90 €
Die KI-Revolution in der Medizin
Peter Lee; Carey Goldberg; Isaac Kohane; Sébastien Bubeck
Pearson Studium (2023)
Paperback
32,50 €
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
92,20 €
now publishers Inc
Page count: 138 pages
Format: Paperback
Publication year: 2012 (published 12 December 2012)
Language: English
A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player is facing many slot machines at once (a "multi-armed bandit"), and must repeatedly choose where to insert the next coin.
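
As a concrete illustration of this setting (not taken from the book), here is a minimal Python sketch of a Bernoulli bandit; the class name BernoulliBandit and the payout probabilities are illustrative assumptions.

```python
import random

class BernoulliBandit:
    """Toy multi-armed bandit: each arm pays 1 with a fixed hidden
    probability, 0 otherwise (one Bernoulli slot machine per arm)."""

    def __init__(self, arm_probs):
        self.arm_probs = arm_probs    # hidden payoff rates, unknown to the player

    def pull(self, arm):
        """Allocate one unit of resource to `arm` and observe the payoff."""
        return 1 if random.random() < self.arm_probs[arm] else 0

# A player facing three slot machines, choosing uniformly at random.
bandit = BernoulliBandit([0.2, 0.5, 0.7])
total = sum(bandit.pull(random.randrange(3)) for _ in range(1000))
print("total payoff of a uniformly random player:", total)
```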

Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option.
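
To make the trade-off concrete, the following sketch implements epsilon-greedy, a standard textbook strategy (not necessarily one analyzed in this monograph): it exploits the empirically best arm most of the time and explores a uniformly random arm with small probability. The arm probabilities and the eps value are arbitrary illustrative choices.

```python
import random

ARM_PROBS = [0.2, 0.5, 0.7]   # hidden payout rates of three toy slot machines

def pull(arm):
    """Draw one Bernoulli payoff from the chosen arm."""
    return 1 if random.random() < ARM_PROBS[arm] else 0

def epsilon_greedy(n_arms, horizon, eps=0.1):
    """With probability eps explore a random arm; otherwise exploit
    the arm with the best empirical mean payoff observed so far."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0
    for _ in range(horizon):
        if random.random() < eps:
            arm = random.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        payoff = pull(arm)
        counts[arm] += 1
        means[arm] += (payoff - means[arm]) / counts[arm]     # running mean
        total += payoff
    return total, means

total, means = epsilon_greedy(n_arms=3, horizon=10_000)
print("total payoff:", total, "| empirical means:", [round(m, 3) for m in means])
```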

In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.
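
As a rough illustration of regret in the i.i.d. case, here is a sketch of the classical UCB1 strategy together with a pseudo-regret estimate (expected payoff of the best single arm minus the payoff actually collected). UCB-type strategies for i.i.d. payoffs are standard in this literature, but this particular code and its constants are illustrative assumptions, not the monograph's own pseudocode.

```python
import math
import random

ARM_PROBS = [0.2, 0.5, 0.7]   # hidden payout rates; the best arm pays 0.7 on average

def pull(arm):
    return 1 if random.random() < ARM_PROBS[arm] else 0

def ucb1(n_arms, horizon):
    """Play each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_a): optimism in the face of uncertainty."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1   # initialization round: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        payoff = pull(arm)
        counts[arm] += 1
        means[arm] += (payoff - means[arm]) / counts[arm]
        total += payoff
    return total

horizon = 10_000
realized = ucb1(n_arms=3, horizon=horizon)
# Pseudo-regret estimate: what always playing the best arm pays in
# expectation, minus what the strategy actually collected.
print("estimated regret:", max(ARM_PROBS) * horizon - realized)
```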

Special-order item | We estimate that this product will ship from us in about 1-3 weeks.
Store availability:
Helsinki
Tapiola
Turku
Tampere
ISBN: 9781601986269