Probability and Statistics: The Science of Uncertainty
Michael J. Evans, Jeffrey S. Rosenthal
- Publisher: W.H. Freeman and Company
- Publication date: 2003-07-25
- List price: $1,160
- VIP price: 5% off, $1,102
- Language: English
- Pages: 638
- Binding: Hardcover
- ISBN: 0716747421
- ISBN-13: 9780716747420
Related categories:
Probability and Statistics (Probability-and-statistics)
Ships after ordering (about 5-7 days)
Description
Unlike traditional introductory math/stat textbooks, Probability and Statistics: The Science of Uncertainty brings a modern flavor to the course, incorporating the computer and an integrated approach to inference. From the start the book integrates simulations into its theoretical coverage and emphasizes computer-powered computation throughout. Math and science majors with just one year of calculus can use this text and experience a refreshing blend of applications and theory that goes beyond merely mastering technicalities. They get a thorough grounding in probability theory and then go beyond it to the theory of statistical inference and its applications. The integrated approach to inference covers the frequency approach as well as Bayesian methodology, with Bayesian inference developed as a logical extension of likelihood methods. A separate chapter is devoted to the important topic of model checking, applied in the context of standard applied statistical techniques. Examples of data analyses using real-world data appear throughout the text, and a final chapter introduces a number of the most important stochastic process models using elementary methods.
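To give a flavor of the simulation-based approach the description emphasizes (a minimal sketch of the general idea, not an example taken from the book; the helper name `monte_carlo_prob` is hypothetical):

```python
import random

def monte_carlo_prob(event, trials=100_000, seed=0):
    """Estimate P(event) as the fraction of simulated trials where event(rng) is True."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(event(rng) for _ in range(trials))
    return hits / trials

# Toy example: P(at least one six in four rolls of a fair die).
# The exact answer is 1 - (5/6)^4 ≈ 0.5177; the estimate should land nearby.
est = monte_carlo_prob(lambda rng: any(rng.randint(1, 6) == 6 for _ in range(4)))
print(est)
```

This kind of estimator is the simplest instance of the Monte Carlo approximations treated in Chapter 4 of the text.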
Contents
1. Probability Models
1.1 Probability: A Measure of Uncertainty
1.1.1 Why Do We Need Probability Theory?
1.2 Probability Models
1.3 Basic Results for Probability Models
1.4 Uniform Probability on Finite Spaces
1.4.1 Combinatorial Principles
1.5 Conditional Probability and Independence
1.5.1 Conditional Probability
1.5.2 Independence of Events
1.6 Continuity of P
1.7 Further Proofs (Advanced)
2. Random Variables and Distributions
2.1 Random Variables
2.2 Distribution of Random Variables
2.3 Discrete Distributions
2.3.1 Important Discrete Distributions
2.4 Continuous Distributions
2.4.1 Important Absolutely Continuous Distributions
2.5 Cumulative Distribution Functions (cdfs)
2.5.1 Properties of Distribution Functions
2.5.2 Cdfs of Discrete Distributions
2.5.3 Cdfs of Absolutely Continuous Distributions
2.5.4 Mixture Distributions
2.5.5 Distributions Neither Discrete Nor Continuous (Advanced)
2.6 One-Dimensional Change of Variable
2.6.1 The Discrete Case
2.6.2 The Continuous Case
2.7 Joint Distributions
2.7.1 Joint Cumulative Distribution Functions
2.7.2 Marginal Distributions
2.7.3 Joint Probability Functions
2.7.4 Joint Density Functions
2.8 Conditioning and Independence
2.8.1 Conditioning on Discrete Random Variables
2.8.2 Conditioning on Continuous Random Variables
2.8.3 Independence of Random Variables
2.8.4 Sampling From a Population
2.9 Multi-Dimensional Change of Variable
2.9.1 The Discrete Case
2.9.2 The Continuous Case (Advanced)
2.9.3 Convolution
2.10 Simulating Probability Distributions
2.10.1 Simulating Discrete Distributions
2.10.2 Simulating Continuous Distributions
2.11 Further Proofs (Advanced)
3. Expectation
3.1 The Discrete Case
3.2 The Absolutely Continuous Case
3.3 Variance, Covariance, and Correlation
3.4 Generating Functions
3.4.1 Characteristic Functions (Advanced)
3.5 Conditional Expectation
3.5.1 Discrete Case
3.5.2 Absolutely Continuous Case
3.5.3 Double Expectations
3.5.4 Conditional Variance
3.6 Inequalities
3.6.1 Jensen's Inequality (Advanced)
3.7 General Expectations (Advanced)
3.8 Further Proofs (Advanced)
4. Sampling Distributions and Limits
4.1 Sampling Distributions
4.2 Convergence in Probability
4.2.1 The Weak Law of Large Numbers
4.3 Convergence with Probability 1
4.3.1 The Strong Law of Large Numbers
4.4 Monte Carlo Approximations
4.5 Convergence in Distribution
4.5.1 The Central Limit Theorem
4.6 Normal Distribution Theory
4.6.1 The Chi-Square Distribution
4.6.2 The t Distribution
4.6.3 The F Distribution
4.7 Further Proofs (Advanced)
5. Statistical Inference
5.1 Why Do We Need Statistics?
5.2 Inference Using a Probability Model
5.3 Statistical Models
5.4 Data Collection
5.4.1 Finite Population Sampling
5.4.2 Random Sampling
5.4.3 Histograms
5.4.4 Survey Sampling
5.5 Some Basic Inferences
5.5.1 Descriptive Statistics
5.5.2 Types of Inference
6. Likelihood Inference
6.1 The Likelihood Function
6.1.1 Sufficient Statistics
6.2 Maximum Likelihood Estimation
6.2.1 The Multidimensional Case (Advanced)
6.3 Inferences Based on the MLE
6.3.1 Standard Errors and Bias
6.3.2 Confidence Intervals
6.3.3 Testing Hypotheses and P-values
6.3.4 Sample Size Calculations: Length of Confidence Intervals
6.3.5 Sample Size Calculations: Power
6.4 Distribution-Free Models
6.4.1 Method of Moments
6.4.2 Bootstrapping
6.4.3 The Sign Statistic and Inferences about Quantiles
6.5 Large Sample Behavior of the MLE (Advanced)
7. Bayesian Inference
7.1 The Prior and Posterior Distributions
7.2 Inferences Based on the Posterior
7.2.1 Estimation
7.2.2 Credible Intervals
7.2.3 Hypothesis Testing and Bayes Factors
7.3 Bayesian Computations
7.3.1 Asymptotic Normality of the Posterior
7.3.2 Sampling from the Posterior
7.3.3 Sampling from the Posterior Using Gibbs Sampling (Advanced)
7.4 Choosing Priors
7.5 Further Proofs (Advanced)
8. Optimal Inferences
8.1 Optimal Unbiased Estimation
8.1.1 The Cramer-Rao Inequality (Advanced)
8.2 Optimal Hypothesis Testing
8.2.1 Likelihood Ratio Tests (Advanced)
8.3 Optimal Bayesian Inferences
8.4 Further Proofs (Advanced)
9. Model Checking
9.1 Checking the Sampling Model
9.1.1 Residual Plots and Probability Plots
9.1.2 The Chi-Square Goodness of Fit Test
9.1.3 Prediction and Cross-Validation
9.1.4 What Do We Do When a Model Fails?
9.2 Checking the Bayesian Model
9.3 The Problem of Multiple Tests
10. Relationships Among Variables
10.1 Related Variables
10.1.1 Cause-Effect Relationships
10.1.2 Design for Experiments
10.2 Categorical Response and Predictors
10.2.1 Random Predictor
10.2.2 Deterministic Predictor
10.2.3 Bayesian Formulation
10.3 Quantitative Response and Predictors
10.3.1 The Method of Least Squares
10.3.2 The Simple Linear Regression Model
10.3.3 Bayesian Simple Linear Model (Advanced)
10.3.4 The Multiple Linear Regression Model (Advanced)
10.4 Quantitative Response and Categorical Predictors
10.4.1 One Categorical Predictor (One-Way ANOVA)
10.4.2 Repeated Measures (Paired Comparisons)
10.4.3 Two Categorical Predictors (Two-Way ANOVA)
10.4.4 Randomized Blocks
10.4.5 One Categorical and Quantitative Predictor
10.5 Categorical Response and Quantitative Predictors
10.6 Further Proofs (Advanced)
11. Advanced Topic: Stochastic Processes
11.1 Simple Random Walk
11.1.1 The Distribution of the Fortune
11.1.2 The Gambler's Ruin Problem
11.2 Markov Chains
11.2.1 Examples of Markov Chains
11.2.2 Computing with Markov Chains
11.2.3 Stationary Distributions
11.2.4 Markov Chain Limit Theorem
11.3 Markov Chain Monte Carlo
11.3.1 The Metropolis-Hastings Algorithm
11.3.2 The Gibbs Sampler
11.4 Martingales
11.4.1 Definition of a Martingale
11.4.2 Expected Values
11.4.3 Stopping Times
11.5 Brownian Motion
11.5.1 Faster and Faster Random Walks
11.5.2 Brownian Motion as a Limit
11.5.3 Diffusions and Stock Prices
11.6 Poisson Processes
11.7 Further Proofs
Appendices
A. Mathematical Background
A.1 Derivatives
A.2 Integrals
A.3 Infinite Series
A.4 Matrix Multiplication
A.5 Partial Derivatives
A.6 Multivariable Integrals
A.6.1 Non-rectangular Regions
B. Computations
C. Common Distributions
D. Tables
D.1 Random Numbers
D.2 Standard Normal Distribution
D.3 Chi-Square Distribution Probabilities
D.4 Student Distribution Probabilities
D.5 F Distribution Probabilities
D.6 Binomial Distribution Probabilities
Index