Let's consider a k-dimensional random vector, i.e. a fixed-position collection of random variables (real-valued measurable functions),

x = (X1,...,Xj,...,Xk)

Consider now n such vectors, indexed by i = 1,...,n,

xi = (X1i,...,Xji,...,Xki)

and consider them as a collection called a "sample", S = (x1,...,xi,...,xn). We then call each k-dimensional vector an "observation" (although it really becomes one only once we measure and record the realizations of the random variables involved).

Let's first treat the case where there exists a probability mass function (PMF) or a probability density function (PDF), and joint such functions as well. Denote by fi(xi), i = 1,...,n, the joint PMF or joint PDF of each random vector, and by f(x1,...,xi,...,xn) the joint PMF or joint PDF of all these vectors together.

Then, the sample S is called an "independent sample" if the following mathematical relation holds:

f(x1,...,xi,...,xn) = ∏i=1..n fi(xi),  ∀ (x1,...,xi,...,xn) ∈ DS
DS is the joint domain created by the n random vectors/observations.
This means that the "observations" are "jointly independent" (in the statistical sense, or "independent in probability", as the older expression had it, still seen occasionally today). The habit is to simply call them "independent observations".
Note that the statistical independence property here is over the index i, i.e. between observations. It is unrelated to what are the probabilistic/statistical relations between the random variables in each observation (in the general case we treat here where each observation is multidimensional).
Note also that in cases where we have continuous random variables with no densities, the above can be expressed in terms of the distribution functions.
This is what "independent observations" means. It is a precisely defined property expressed in mathematical terms. Let's see some of what it implies.
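As a concrete sketch, the defining factorization can be checked numerically for a tiny discrete case (the PMFs below are made up for illustration):

```python
import numpy as np

# Two "observations", each a 2-dimensional discrete random vector taking
# values in {0,1} x {0,1}, with hypothetical joint PMFs f1 and f2.
f1 = np.array([[0.1, 0.2],
               [0.3, 0.4]])    # PMF of observation x1
f2 = np.array([[0.25, 0.25],
               [0.25, 0.25]])  # PMF of observation x2

# Independent observations: the PMF of the whole sample is the product
# f(x1, x2) = f1(x1) * f2(x2) at every point of the joint domain D_S.
joint = np.einsum('ab,cd->abcd', f1, f2)

# Check the equality pointwise, and that the result is still a valid PMF.
pointwise_ok = all(
    np.isclose(joint[a, b, c, d], f1[a, b] * f2[c, d])
    for a in range(2) for b in range(2) for c in range(2) for d in range(2)
)
print(pointwise_ok, np.isclose(joint.sum(), 1.0))
```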
SOME CONSEQUENCES OF HAVING INDEPENDENT OBSERVATIONS
A. If two observations are part of a group of jointly independent observations, then they are also "pair-wise independent" (statistically),
f(xi,xm)=fi(xi)fm(xm)∀i≠m,i,m=1,...,n
This in turn implies that conditional PMFs/PDFs equal the "marginal" ones:
f(xi∣xm)=fi(xi)∀i≠m,i,m=1,...,n
This generalizes to many arguments, conditioned or conditioning, say
f(xi,xℓ∣xm)=f(xi,xℓ),f(xi∣xm,xℓ)=fi(xi)
etc., as long as the indexes on the left are different from the indexes on the right of the vertical line.
This implies that if we actually observe one observation, the probabilities characterizing any other observation of the sample do not change. So as regards prediction, an independent sample is not our best friend. We would prefer to have dependence so that each observation could help us say something more about any other observation.
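A quick numerical illustration of f(xi∣xm)=fi(xi), again with hypothetical PMFs:

```python
import numpy as np

# Hypothetical marginal PMFs for two scalar observations xi and xm.
fi = np.array([0.2, 0.5, 0.3])   # P(xi = each of 3 values)
fm = np.array([0.6, 0.4])        # P(xm = each of 2 values)

# Under independence the joint is the outer product: f(xi, xm) = fi * fm.
joint = np.outer(fi, fm)

# Conditional PMF of xi given each value of xm: divide each column of the
# joint by the probability of that xm value.
conditional = joint / joint.sum(axis=0, keepdims=True)

# Every conditional column equals the marginal fi: observing xm tells us
# nothing new about xi.
print(np.allclose(conditional, fi[:, None]))
```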
B. On the other hand, an independent sample has maximum informational content. Every observation, being independent, carries information that cannot be inferred, wholly or partly, by any other observation in the sample. So the sum total is maximum, compared to any comparable sample where there exists some statistical dependence between some of the observations. But of what use is this information, if it cannot help us improve our predictions?
Well, this is indirect information about the probabilities that characterize the random variables in the sample. The more common characteristics these observations have (a common probability distribution, in our case), the better positioned we are to uncover them, provided our sample is independent.
In other words if the sample is independent and "identically distributed", meaning
fi(xi)=fm(xm)=f(x),i≠m
it is the best possible sample in order to obtain information about not only the common joint probability distribution f(x), but also for the marginal distributions of the random variables that comprise each observation, say fj(xji).
So even though f(xi∣xm)=fi(xi), so zero additional predictive power as regards the actual realization of xi, with an independent and identically distributed sample we are in the best position to uncover the common distribution f (or some of its properties), as well as the marginal distributions fj of the variables within each observation.
Therefore, as regards estimation (which is sometimes used as a catch-all term, but here it should be kept distinct from the concept of prediction), an independent sample is our "best friend", if it is combined with the "identically distributed" property.
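A small simulation sketch of this point (the Normal(3, 2) law and the seed are arbitrary choices): no observation predicts another, yet together they recover the common distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed common law for the i.i.d. sample: Normal(mu=3, sigma=2).
# The sample mean and sample standard deviation home in on the true
# parameters as the number of observations n grows.
mu, sigma = 3.0, 2.0
for n in (10, 1_000, 100_000):
    sample = rng.normal(mu, sigma, size=n)   # n independent observations
    print(n, round(sample.mean(), 2), round(sample.std(ddof=1), 2))
```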
C. It also follows that an independent sample of observations where each is characterized by a totally different probability distribution, with no common characteristics whatsoever, is as worthless a collection of information as one can get (of course, every piece of information on its own is worthy; the issue here is that taken together they cannot be combined to offer anything useful). Imagine a sample containing three observations: one containing (quantitative characteristics of) fruits from South America, another containing mountains of Europe, and a third containing clothes from Asia. Pretty interesting pieces of information, all three of them, but together as a sample they cannot do anything statistically useful for us.
Put another way, a necessary and sufficient condition for an independent sample to be useful is that the observations have some statistical characteristics in common. This is why, in Statistics, the word "sample" is not synonymous with "collection of information" in general, but with "collection of information on entities that have some common characteristics".
APPLICATION TO THE OP'S DATA EXAMPLE
Responding to a request from user @gung, let's examine the OP's example in light of the above.
We reasonably assume that we are in a school with more than two teachers and more than six pupils. So
a) we are sampling both pupils and teachers, and
b) we include in our data set the grade that corresponds to each teacher-pupil combination.
Namely, the grades are not "sampled"; they are a consequence of the sampling we did on teachers and pupils. Therefore it is reasonable to treat the random variable G (= grade) as the "dependent variable", while pupils (P) and teachers (T) are "explanatory variables" (not all possible explanatory variables, just some). Our sample consists of six observations, which we write explicitly as S=(s1,...,s6):
s1=(T1,P1,G1)
s2=(T1,P2,G2)
s3=(T1,P3,G3)
s4=(T2,P4,G4)
s5=(T2,P5,G5)
s6=(T2,P6,G6)
Under the stated assumption "pupils do not influence each other", we can consider the Pi variables as independently distributed.
Under a non-stated assumption that "all other factors" that may influence the Grade are independent of each other, we can also consider the Gi variables to be independent of each other.
Finally under a non-stated assumption that teachers do not influence each other, we can consider the variables T1,T2 as statistically independent between them.
But irrespective of what causal/structural assumptions we make regarding the relation between teachers and pupils, the fact remains that observations s1,s2,s3 contain the same random variable (T1), while observations s4,s5,s6 also contain the same random variable (T2).
Note carefully the distinction between "the same random variable" and "two distinct random variables that have identical distributions".
So even if we assume that "teachers do NOT influence pupils", then still, our sample as defined above is not an independent sample, because s1,s2,s3 are statistically dependent through T1, while s4,s5,s6 are statistically dependent through T2.
Assume now that we exclude the random variable "teacher" from our sample. Is the (Pupil, Grade) sample of six observations, an independent sample?
Here, the assumptions we will make regarding what is the structural relationship between teachers, pupils, and grades does matter.
First, do teachers directly affect the random variable "Grade", through, perhaps, different "grading attitudes/styles"? For example, T1 may be a "tough grader" while T2 may not be. In that case, "not seeing" the variable "Teacher" does not make the sample independent, because it is now the G1,G2,G3 that are dependent, due to a common source of influence, T1 (and analogously for the other three).
But say that teachers are identical in that respect. Then, under the stated assumption "teachers influence pupils", we have again that the first three observations are dependent on each other, because teachers influence pupils, who in turn influence grades, and we arrive at the same result, albeit indirectly this time (and likewise for the other three). So again, the sample is not independent.
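This conclusion can be sketched in a simulation, under an assumed (hypothetical) structure where each grade equals a common teacher effect plus independent pupil noise:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical structure: grade = teacher effect + pupil-specific noise.
n = 200_000                                # Monte Carlo replications
t1 = rng.normal(0, 1, size=n)              # teacher T1's effect
g1 = t1 + rng.normal(0, 1, size=n)         # pupil 1's grade (teacher T1)
g2 = t1 + rng.normal(0, 1, size=n)         # pupil 2's grade (teacher T1)
t2 = rng.normal(0, 1, size=n)              # teacher T2's effect
g4 = t2 + rng.normal(0, 1, size=n)         # pupil 4's grade (teacher T2)

# Grades sharing a teacher are correlated (theoretical corr = 0.5);
# grades under different teachers are not.
print(np.corrcoef(g1, g2)[0, 1] > 0.4)         # same teacher: dependent
print(abs(np.corrcoef(g1, g4)[0, 1]) < 0.05)   # different teachers: ~independent
```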
THE CASE OF GENDER
Now, let's make the (Pupil, Grade) six-observation sample "conditionally independent with respect to teacher" (see other answers) by assuming that all six pupils in fact have the same teacher. But in addition, let's include in the sample the random variable "Ge = Gender", which traditionally takes two values (M, F), while more recently it has started to take more. Our once-again three-dimensional six-observation sample is now
s1=(Ge1,P1,G1)
s2=(Ge2,P2,G2)
s3=(Ge3,P3,G3)
s4=(Ge4,P4,G4)
s5=(Ge5,P5,G5)
s6=(Ge6,P6,G6)
Note carefully that what we included in the description of the sample as regards Gender, is not the actual value that it takes for each pupil, but the random variable "Gender". Look back at the beginning of this very long answer: the Sample is not defined as a collection of numbers (or fixed numerical or not values in general), but as a collection of random variables (i.e. of functions).
Now, does the gender of one pupil influence (structurally or statistically) the gender of another pupil? We could reasonably argue that it doesn't. So in that respect, the Gei variables are independent. Does the gender of pupil 1, Ge1, directly affect some other pupil (P2, P3, ...) in some other way? Hmm, if I recall correctly, there are competing educational theories on the matter. So if we assume that it does not, there goes another possible source of dependence between observations. Finally, does the gender of a pupil directly influence the grades of another pupil? If we argue that it doesn't, we obtain an independent sample (conditional on all pupils having the same teacher).
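To close, a sketch of what "conditionally independent with respect to teacher" buys us, under the same hypothetical grade = teacher effect + noise structure: marginally the grades are dependent through the random teacher, but once the teacher is fixed (all pupils share one teacher), the common source is a constant and the dependence disappears.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Marginally: the teacher is random, so two pupils' grades share a common
# random source and are dependent.
t = rng.normal(0, 1, size=n)                   # random teacher effect
g1 = t + rng.normal(0, 1, size=n)
g2 = t + rng.normal(0, 1, size=n)
print(np.corrcoef(g1, g2)[0, 1] > 0.4)         # dependent

# Conditionally: all pupils have the *same* fixed teacher, so the common
# source is constant and the grades decorrelate.
t_fixed = 0.7                                  # hypothetical fixed effect
h1 = t_fixed + rng.normal(0, 1, size=n)
h2 = t_fixed + rng.normal(0, 1, size=n)
print(abs(np.corrcoef(h1, h2)[0, 1]) < 0.05)   # conditionally independent
```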