Free Statistics


Author's title

Author: *The author of this computation has been verified*
R Software Module: rwasp_hypothesismean4.wasp
Title produced by software: Testing Mean with known Variance - Sample Size
Date of computation: Thu, 13 Nov 2008 09:30:03 -0700
Cite this page as follows: Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?v=date/2008/Nov/13/t1226593922is93lubc1m5ea6o.htm/, Retrieved Sun, 19 May 2024 12:04:13 +0000
Statistical Computations at FreeStatistics.org, Office for Research Development and Education, URL https://freestatistics.org/blog/index.php?pk=24687, Retrieved Sun, 19 May 2024 12:04:13 +0000

Original text written by user:
IsPrivate? No (this computation is public)
User-defined keywords:
Estimated Impact: 202
Family? (F = Feedback message, R = changed R code, M = changed R Module, P = changed Parameters, D = changed Data)
F     [Testing Mean with known Variance - Sample Size] [case:the pork qua...] [2008-11-12 09:53:32] [9ea94c8297ec7e569f27218c1d8ea30f]
F         [Testing Mean with known Variance - Sample Size] [question 4 pork] [2008-11-13 16:30:03] [f7fbcd402030df685d3fe4ce577d7846] [Current]
Feedback Forum
2008-11-16 14:13:49 [Julie Govaerts]
Technique used: Testing Mean with known Variance - Sample Size

We need to reduce our beta error of 94% to a much smaller beta error of 5%, which increases the probability of detecting fraud and reduces the variance. The probability of being wrong when accepting the null hypothesis (15% fat) must therefore become very small, because we want to be certain that we are getting 15% fat. To make this probability as small as possible, we will have to take a very large number of samples.
Once we are certain that we are getting 15% fat, there is certainly no more fraud.

We need 32466 samples to bring the beta error down to 5%. The costs would run far too high and this is a very cumbersome process, so it is not realistic.
2008-11-17 14:15:18 [Hundra Smet]
The student used the correct method, but the conclusion is far too short. In addition, the student describes the numbers in tens, whereas they should be in tens of thousands.

The correct, more complete conclusion is:
we want to reduce the type I and type II errors (increase the probability of detection). We do this by enlarging the sample, which reduces the variance. The calculation shows that we must increase the sample size to 32,466.5. However, this costs a great deal of time and money and is therefore not feasible.
2008-11-21 18:45:10 [Gregory De Meulenaer]
To have a 95% chance of detecting fraud, we would need a sample of 32466 (and thus not 32) tests. This does not seem feasible to me in practice, and it would take too much time and money.
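
The sample size quoted in these comments can be cross-checked with the textbook formula for a one-sided z-test with known variance, n = ((z(1-alpha) + z(1-beta)) * sigma / (mu1 - mu0))^2. The short base-R sketch below is only an illustration added here (it is not part of the original submission) and uses the parameters from the computation on this page:

# Illustrative sketch: closed-form sample size for a one-sided z-test with known variance.
sigma <- sqrt(0.012)   # known population standard deviation
mu0   <- 0.15          # null hypothesis mean (15% fat)
mu1   <- 0.152         # alternative hypothesis mean
alpha <- 0.05          # type I error
beta  <- 0.05          # type II error
n <- ((qnorm(1 - alpha) + qnorm(1 - beta)) * sigma / (mu1 - mu0))^2
n  # approximately 32466.5, in line with the software's result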





Summary of computational transaction
Raw Input: view raw input (R code)
Raw Output: view raw output of R engine
Computing time: 2 seconds
R Server: 'Gwilym Jenkins' @ 72.249.127.135


Source: https://freestatistics.org/blog/index.php?pk=24687&T=0

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=24687&T=0









Testing Mean with known Variance
population variance: 0.012
null hypothesis about mean: 0.15
alternative hypothesis about mean: 0.152
type I error: 0.05
type II error: 0.05
sample size: 32466.5214491449
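
As a quick sanity check of the table above (an illustrative sketch, not part of the page's original output): with the reported sample size, the one-sided z-test that rejects the null mean 0.15 at alpha = 0.05 should indeed leave a type II error of about 0.05 against the alternative mean 0.152.

# Illustrative sketch: type II error implied by the reported sample size.
sigma <- sqrt(0.012)            # known population standard deviation
mu0   <- 0.15                   # null hypothesis about mean
mu1   <- 0.152                  # alternative hypothesis about mean
alpha <- 0.05                   # type I error
n     <- 32466.5214491449       # sample size from the table above
se    <- sigma / sqrt(n)        # standard error of the sample mean
crit  <- mu0 + qnorm(1 - alpha) * se        # one-sided rejection boundary (about 0.151)
beta  <- pnorm(crit, mean = mu1, sd = se)   # P(fail to reject | true mean = mu1)
beta  # approximately 0.05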


Source: https://freestatistics.org/blog/index.php?pk=24687&T=1

Globally Unique Identifier (entire table): ba.freestatistics.org/blog/index.php?pk=24687&T=1





Parameters (Session):
par1 = 0.012 ; par2 = 0.15 ; par3 = 0.152 ; par4 = 0.05 ; par5 = 0.05 ;
Parameters (R input):
par1 = 0.012 ; par2 = 0.15 ; par3 = 0.152 ; par4 = 0.05 ; par5 = 0.05 ;
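
The module code below begins by converting par1 through par5 with as.numeric(), so the R engine presumably passes the session parameters in as character strings. A minimal sketch of that binding is shown here as an assumption; the actual dispatch mechanism of the freestatistics.org engine is not reproduced on this page.

# Assumed binding of the session parameters before the module code runs.
par1 <- '0.012'   # population variance
par2 <- '0.15'    # null hypothesis about mean
par3 <- '0.152'   # alternative hypothesis about mean
par4 <- '0.05'    # type I error
par5 <- '0.05'    # type II error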
R code (references can be found in the software module):
par1 <- as.numeric(par1)  # population variance
par2 <- as.numeric(par2)  # null hypothesis about mean
par3 <- as.numeric(par3)  # alternative hypothesis about mean
par4 <- as.numeric(par4)  # type I error (alpha)
par5 <- as.numeric(par5)  # type II error (beta)
c <- 'NA'  # placeholder for the critical value
csn <- abs(qnorm(par5))  # |z| quantile for beta (computed but not used below)
if (par2 == par3)
{
conclusion <- 'Error: the null hypothesis and alternative hypothesis must not be equal.'
}
ua <- abs(qnorm(par4))  # |z| quantile for alpha
ub <- qnorm(par5)  # z quantile for beta (negative)
c <- (par2 + ua/ub*(-par3)) / (1 - (ua/ub))  # critical value of the sample mean
sqrtn <- ua*sqrt(par1)/(c - par2)  # square root of the required sample size
samplesize <- sqrtn * sqrtn  # required sample size
# print the intermediate results to the raw output
ua
ub
c
sqrtn
samplesize
load(file='createtable')  # load the module's table-building helper functions
a<-table.start()  # begin assembling the output table
a<-table.row.start(a)
a<-table.element(a,hyperlink('ht_mean_knownvar.htm','Testing Mean with known Variance','learn more about Statistical Hypothesis Testing about the Mean when the Variance is known'),2,TRUE)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'population variance',header=TRUE)
a<-table.element(a,par1)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'null hypothesis about mean',header=TRUE)
a<-table.element(a,par2)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'alternative hypothesis about mean',header=TRUE)
a<-table.element(a,par3)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'type I error',header=TRUE)
a<-table.element(a,par4)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,'type II error',header=TRUE)
a<-table.element(a,par5)
a<-table.row.end(a)
a<-table.row.start(a)
a<-table.element(a,hyperlink('ht_mean_knownvar.htm#ex4','sample size','example'),header=TRUE)
a<-table.element(a,samplesize)
a<-table.row.end(a)
a<-table.end(a)
table.save(a,file='mytable.tab')
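
The table-building calls above rely on helper functions loaded from the module's 'createtable' file, so the block as a whole only runs inside the freestatistics.org engine. A self-contained sketch of just the numerical part, using the same parameters, is given below as an illustration (it is not the original module):

# Standalone recomputation of the sample size, without the table helpers.
par1 <- 0.012   # population variance
par2 <- 0.15    # null hypothesis about mean
par3 <- 0.152   # alternative hypothesis about mean
par4 <- 0.05    # type I error
par5 <- 0.05    # type II error
ua <- abs(qnorm(par4))                          # about 1.6449
ub <- qnorm(par5)                               # about -1.6449
c <- (par2 + ua/ub * (-par3)) / (1 - ua/ub)     # critical sample mean, 0.151
samplesize <- (ua * sqrt(par1) / (c - par2))^2  # about 32466.52
samplesize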