Biasing in random number space
An interesting way to compare the biased random number technique to more conventional biasing techniques is to consider what is really calculated by analog Monte-Carlo. Each history is associated with a random number sequence $\vec r_i$ and a history score $T(\vec r_i)$ that depends on the random walk specified by $\vec r_i$. The random number sequence $\vec r_i$ is selected from a uniform density $f(\vec r)$ of random number sequences. The expected score is then

$$E = \int T(\vec r)\,f(\vec r)\,d\vec r. \qquad (1)$$

When the calculation is biased, $T(\vec r)$ is normally altered to some $T'(\vec r)$, but $f(\vec r)$ is left unaltered. Note that $T$ and $f$ appear in Eq. (1) in exactly the same way, so it makes just as much sense to alter $f$ to some $f'$ and leave $T$ unaltered. Although it will not be discussed here, it is also possible to alter both $T$ and $f$; that is, it is possible to use biased random numbers together with standard variance reduction techniques.

It should be pointed out that many people, especially in the past decade, have invented techniques that allow machines to estimate appropriate biasing parameters based on the machine's experience with the random walks that were sampled. The salient difference here is that the learning and biasing are done in the random number space.

At first, learning and biasing in the random number space may seem like a strange idea, until one realizes that there is a real advantage to working in the random number space. Normally, the user has to supply information about what to learn [1]. Typically, the user states a set of parameters he would like optimized, and the computer will estimate optimal values for these parameters. Unfortunately, it is difficult to write a computer program general enough to allow the user to optimize on all sets of parameters that a user can dream up. In my own work, for example, it was necessary to optimize on space-energy dependent parameters for one class of problems [2] and on space-angle dependent parameters for a second class of problems [3]. Furthermore, a special biasing technique had to be developed for the second class of problems. Learning and biasing in the random number space tends to ameliorate these problems.

Examples and recent experiences with this technique will be presented and discussed in [4].
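As an aside not taken from the chapter itself, the remark about Eq. (1) can be made concrete: if the sequence $\vec r$ is drawn from some non-uniform density $f'(\vec r)$ instead of the uniform $f(\vec r)$, and each history score is multiplied by the weight $f(\vec r)/f'(\vec r)$, the expectation is unchanged, because $E = \int T(\vec r)\,[f(\vec r)/f'(\vec r)]\,f'(\vec r)\,d\vec r$. The Python sketch below illustrates this with a toy score function; the particular $T$, the choice of $f'$, and all names are illustrative assumptions, not anything defined in the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

def score(r):
    """Toy history score T(r): 1 only when every random number in the
    sequence exceeds 0.9 (a deliberately rare event)."""
    return 1.0 if np.all(r > 0.9) else 0.0

K = 3          # length of each random number sequence r = (r_1, ..., r_K)
N = 100_000    # number of histories

# Analog estimate: r drawn from the uniform density f(r) = 1 on the unit cube.
analog = np.mean([score(rng.uniform(size=K)) for _ in range(N)])

# Estimate biased in random number space: r drawn from
# f'(r) = prod_j a * r_j**(a-1), which pushes each r_j toward 1,
# with each score weighted by f(r)/f'(r) so Eq. (1) is preserved.
a = 10.0
total = 0.0
for _ in range(N):
    r = rng.uniform(size=K) ** (1.0 / a)     # inverse-CDF sample: CDF of f' is r**a
    w = 1.0 / np.prod(a * r ** (a - 1.0))    # importance weight f(r)/f'(r)
    total += w * score(r)
biased = total / N

# Both estimate E = integral T(r) f(r) dr = 0.1**3 = 0.001.
print(f"analog = {analog:.5f}  biased in random number space = {biased:.5f}")
```

Because the biased density concentrates the sampled sequences where $T$ is nonzero, the weighted estimator typically reaches the same answer with far fewer zero-scoring histories, which is the point of altering $f$ rather than $T$.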
Saved in:
Main author: | Booth, T. E. |
---|---|
Format: | Book Chapter |
Language: | eng |
Online access: | Full text |
container_end_page | 310 |
---|---|
container_issue | |
container_start_page | 309 |
container_title | Monte-Carlo Methods and Applications in Neutronics, Photonics and Statistical Physics |
container_volume | |
creator | Booth, T. E. |
doi_str_mv | 10.1007/BFb0049058 |
format | Book Chapter |
fulltext | fulltext |
identifier | ISSN: 0075-8450; EISSN: 1616-6361; ISBN: 3540160701; ISBN: 9783540160700; EISBN: 3540397507; EISBN: 9783540397502 |
ispartof | Monte-Carlo Methods and Applications in Neutronics, Photonics and Statistical Physics, 2006, p.309-310 |
issn | 0075-8450 1616-6361 |
language | eng |
recordid | cdi_springer_books_10_1007_BFb0049058 |
source | Springer Books |
title | Biasing in random number space |