# Mathematics - Linear Algebra Functions

Today we continue with **Linear Algebra**, getting into **Linear Functions**. Linear vector spaces and the other topics we covered so far in this series will be useful here, so you should check them out to get a better understanding. I will cover more on this topic next time, to split it up. So, without further ado, let's get started!

## Linear Functions:

Suppose **two linear vector spaces V and W**. A **function f: V -> W** (from V to W) **is linear when**:

- f(u + v) = f(u) + f(v), for every u, v in V
- f(k*v) = k*f(v), for every v in V and every real number k

We sometimes also call a linear function a **morphism or homomorphism** of vector spaces. A **function from a vector space to itself** (f: V -> V) is called an **endomorphism**. Note that the "+" in the first bullet is the addition of V on the left side and the addition of W on the right side; the same holds for the "*" operation in the second bullet.

So, combining both bullets into one, we can generally say that **a function f: V -> W is linear when f(ku + lv) = kf(u) + lf(v)**, where u, v are in V and k, l are real numbers.
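To make the combined condition concrete, here is a minimal Python sketch that checks f(ku + lv) = kf(u) + lf(v) numerically; the specific map `f` below is a hypothetical example, not one from the text.

```python
# Check the combined linearity condition f(k*u + l*v) = k*f(u) + l*f(v)
# for a sample linear map f: R^2 -> R^2 (chosen just for illustration).

def f(v):
    x, y = v
    return (2 * x + y, x - 3 * y)  # an example linear map

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(k, v):
    return tuple(k * a for a in v)

u, v = (1.0, 2.0), (-3.0, 0.5)
k, l = 4.0, -2.0

lhs = f(add(scale(k, u), scale(l, v)))   # f(k*u + l*v)
rhs = add(scale(k, f(u)), scale(l, f(v)))  # k*f(u) + l*f(v)
assert lhs == rhs  # holds for every u, v, k, l, since f is linear
```

Swapping in a non-linear map such as f(x, y) = (x + 1, y) makes the assertion fail, which is a quick way to test candidate maps.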

### Properties:

Suppose **f: V -> W is a linear morphism** between the vector spaces V and W. Then:

- f(0) = 0, where the zeros represent the 0 in V and W respectively
- f(-v) = -f(v), v in V

When **V, U and W are vector spaces** and **f: V -> U and g: U -> W are linear morphisms**, the **composition of g and f**, g∘f: V -> W, is **also a linear morphism**. I will use the terms "linear function" and "linear morphism" interchangeably, since I don't have a preference.
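The composition property can be checked numerically as well; the maps `f` and `g` below are hypothetical examples, not from the text.

```python
# The composition g∘f of two linear maps is again linear.
# f: R^2 -> R^2 and g: R^2 -> R^2 here are example maps.

def f(v):
    x, y = v
    return (x + y, x - y)

def g(v):
    x, y = v
    return (3 * x, 2 * y)

def compose(g, f):
    return lambda v: g(f(v))

gof = compose(g, f)

u, v = (1.0, 2.0), (3.0, -1.0)
# additivity of g∘f:
lhs = gof((u[0] + v[0], u[1] + v[1]))
rhs = tuple(a + b for a, b in zip(gof(u), gof(v)))
assert lhs == rhs
# homogeneity of g∘f:
k = 5.0
assert gof((k * u[0], k * u[1])) == tuple(k * a for a in gof(u))
```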

### Examples:

- With V, W being vector spaces, f: V -> W with f(v) = 0 (the **zero morphism**) and Iv: V -> V with Iv(v) = v (the **identity morphism**) are both linear functions.
- f: R -> R with **f(x) = ax**, a in R, is a linear function. The function f(x) = ax + b (with b != 0) is not a linear function, though.
- f: R^3 -> R^2, with f(x, y, z) = (x + y, x - z), is also a linear function.

Let's prove the last one.

For (x1, y1, z1), (x2, y2, z2) of R^3 we have:

*f( (x1, y1, z1) + (x2, y2, z2) ) = f( (x1 + x2, y1 + y2, z1 + z2) ) = ( x1 + x2 + y1 + y2, x1 + x2 - z1 - z2 ) = ( x1 + y1, x1 - z1 ) + ( x2 + y2, x2 - z2 ) = f(x1, y1, z1) + f(x2, y2, z2)*

For (x, y , z) in R^3 and k real we have:

*f( k*(x, y, z) ) = f( (kx, ky, kz) ) = ( kx + ky, kx - kz ) = k*( x + y, x - z ) = k*f(x, y, z)*

[It may seem a little weird, but we simply follow our function, which says that f(x, y, z) = (x + y, x - z).]
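The same two conditions we just proved by hand can be verified numerically for the text's map f(x, y, z) = (x + y, x - z):

```python
# Numerically verify the linearity of f(x, y, z) = (x + y, x - z),
# the example map from the text.

def f(v):
    x, y, z = v
    return (x + y, x - z)

v1 = (1.0, 2.0, 3.0)
v2 = (-4.0, 0.5, 2.0)
k = 7.0

# additivity: f(v1 + v2) = f(v1) + f(v2)
s = tuple(a + b for a, b in zip(v1, v2))
assert f(s) == tuple(a + b for a, b in zip(f(v1), f(v2)))
# homogeneity: f(k*v1) = k*f(v1)
assert f(tuple(k * a for a in v1)) == tuple(k * a for a in f(v1))
```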

### Hom-set:

With V and W being vector spaces **Hom(V, W) is the set of all linear morphisms from V to W**.

For all f, g in Hom(V, W) we define an **addition and scalar multiplication**:

- (f + g)(x) = f(x) + g(x), x in V
- (kf)(x) = kf(x), x in V, k in R
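These two pointwise operations are easy to express in code; the maps `f` and `g` below are hypothetical members of Hom(R^2, R^2), chosen only to illustrate the definitions.

```python
# Pointwise addition and scalar multiplication in Hom(V, W):
# (f + g)(x) = f(x) + g(x) and (k*f)(x) = k*f(x).

def f(v):
    x, y = v
    return (x + y, x)

def g(v):
    x, y = v
    return (2 * x, -y)

def hom_add(f, g):
    # (f + g)(x) = f(x) + g(x), componentwise in W
    return lambda v: tuple(a + b for a, b in zip(f(v), g(v)))

def hom_scale(k, f):
    # (k*f)(x) = k * f(x), componentwise in W
    return lambda v: tuple(k * a for a in f(v))

h = hom_add(f, g)
assert h((1.0, 2.0)) == (5.0, -1.0)   # f gives (3, 1), g gives (2, -2)
s = hom_scale(3.0, f)
assert s((1.0, 2.0)) == (9.0, 3.0)
```

With these operations, Hom(V, W) is itself a vector space, which is why the definition matters.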

As we already said in the beginning, a linear function from V to itself is called an **endomorphism**, and we can define it using Hom like this: **End(V) = Hom(V, V)**.

### Kernel and Image:

Suppose f: V -> W is a linear morphism between the vector spaces V and W.

The **kernel of the function f** is a **subset of V** and is defined as:

Ker f = { x in V : f(x) = 0 }

The **image of the function f** is a **subset of W** and is defined as:

f(V) = Im f = { w in W : w = f(v), v in V }

We can prove, using what we already know, that the **kernel and image are not only subsets but also subspaces**! I will leave the proof to you, if you want a quick refresher of your knowledge :D

**Example:**

Let's take the same function as before: f(x, y, z) = (x + y, x - z), where f: R^3 -> R^2.

Now suppose we have v = (x, y, z) in Ker f.

Using the definition of the kernel we have that:

f(x, y, z) = 0 => (x + y, x - z) = (0, 0) => x + y = 0 and x - z = 0 => x = -y and x = z.

So, a generic element (x, y, z) of Ker f has the form:

(x, -x, x) = x*(1, -1, 1)

That way Ker f = span{ (1, -1, 1) } and dimKerf = 1. (See how everything gets connected?)
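As a quick sanity check, every multiple of (1, -1, 1) should indeed be sent to zero by f:

```python
# Check that every multiple of (1, -1, 1) lies in Ker f
# for f(x, y, z) = (x + y, x - z), as derived in the text.

def f(v):
    x, y, z = v
    return (x + y, x - z)

for t in (-2.0, 0.0, 1.0, 3.5):
    v = (t, -t, t)          # t * (1, -1, 1)
    assert f(v) == (0.0, 0.0)
```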

Let's now take a w in Im f = f(V). Then, from the definition of the image, we know that there is a v = (x, y, z) in R^3 such that:

w = f(x, y, z) = (x + y, x - z) =>

w = (x, x) + (y, 0) - (0, z) = x*(1, 1) + y*(1, 0) - z*(0, 1) [the minus sign can be dropped, since it does not change the span]

So, that way Imf = span{ (1, 1), (1, 0), (0, 1) }.

But (1, 1) = (1, 0) + (0, 1), so the set is linearly dependent, and that way we have:

Imf = span{ (1, 0), (0, 1) } and dimImf = 2

You can see that dimKerf + dimImf = 1 + 2 = 3 = dimR^3. That is no coincidence, and we will explain it now!
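Since f(x, y, z) = (x + y, x - z) is given by a matrix, NumPy can confirm these dimensions: the image dimension is the matrix rank, and the kernel dimension is the number of columns minus the rank.

```python
# Rank-nullity check with NumPy: dimKerf + dimImf = dimR^3.
# f(x, y, z) = (x + y, x - z) has the matrix A below.
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, -1]])       # rows compute x + y and x - z

rank = np.linalg.matrix_rank(A)  # dimImf
nullity = A.shape[1] - rank      # dimKerf
assert rank == 2 and nullity == 1
assert rank + nullity == 3       # = dimR^3
```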

### Monomorphism, Epimorphism and Isomorphism:

Suppose V, W two vector spaces and f: V -> W a linear function.

The function f is called a **monomorphism when f is mono (one-to-one)**.

The function f is called an **epimorphism when f is epi (onto)**.

The function f is called an **isomorphism when f is both mono and epi**.

**Mono** (injective) means that f(v1) = f(v2) implies v1 = v2, so distinct vectors have distinct images.

**Epi** (surjective) means that f(V) = W, so f "covers" all of W.

To check for mono-, epi- and iso-morphisms we use the following **rules**:

- A **linear function f: V -> W** is a **monomorphism if and only if Kerf = {0}**
- A linear function f: V -> W is an **epimorphism if and only if Imf = W**
- A linear function f: V -> W is an **isomorphism if and only if Kerf = {0} and Imf = W**

**Example:**

Suppose f: R -> R^2, where f(x) = (x, 0) for every x in R.

For an x in Ker f we have:

f(x) = 0 => (x, 0) = (0, 0) => x = 0

That way Ker f = {0}, and f is a monomorphism.
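This example can be checked in code: distinct inputs should give distinct images, and only 0 should map to (0, 0).

```python
# f: R -> R^2, f(x) = (x, 0) is a monomorphism:
# f(x1) = f(x2) forces x1 = x2.

def f(x):
    return (x, 0.0)

# Injectivity on a sample of inputs: distinct inputs, distinct outputs.
xs = [-3.0, -1.0, 0.0, 2.0, 5.5]
images = [f(x) for x in xs]
assert len(set(images)) == len(xs)

# The kernel is trivial: f(x) = (0, 0) only for x = 0.
assert f(0.0) == (0.0, 0.0)
```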

### Monomorphism Rule:

Suppose V, W are two vector spaces and f: V -> W is a monomorphism. When **v1, v2, ..., vn of V** are **linearly independent**, then **f(v1), f(v2), ..., f(vn) of W** are also **linearly independent**.

Also, span{ v1, v2, ..., vn } = span{ f(v1), f(v2), ..., f(vn) }, where "=" here means isomorphic.

So, when V and W are finite-dimensional with dimV <= dimW, a monomorphism f: V -> W maps V isomorphically onto a subspace E = f(V) of W.
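The rule can be illustrated with NumPy: a full-column-rank matrix gives a monomorphism, and the images of independent vectors stay independent (checked via matrix rank). The matrix `A` below is a hypothetical example.

```python
# A monomorphism sends linearly independent vectors to linearly
# independent vectors. Sketch: f: R^2 -> R^3 given by x -> A @ x.
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # rank 2 = dimR^2, so x -> A @ x is mono

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, -1.0])      # v1, v2 independent in R^2
V = np.column_stack([v1, v2])
assert np.linalg.matrix_rank(V) == 2

# Their images A @ v1, A @ v2 stay independent in R^3:
W = np.column_stack([A @ v1, A @ v2])
assert np.linalg.matrix_rank(W) == 2
```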

### Dimension Rules:

Suppose V and W are finite-dimensional vector spaces and f: V -> W is a linear function. Then:

- **dimV = dimKerf + dimImf** (as we already found out in an example previously)
- When **dimV = dimW, then V = W (isomorphic)** and vice versa

The second one can be proven in one direction by noting that V = W (isomorphic) means there is an isomorphism f: V -> W.

Because f is an isomorphism (the mono part), dimKerf = 0 => dimV = dimImf.

Also (the epi part), Imf = W => dimImf = dimW.

And so dimV = dimW.
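When dimV = dimW, an isomorphism can be represented by an invertible square matrix; here is a small NumPy sketch with a hypothetical example matrix, showing that the map is both mono and epi by inverting it.

```python
# When dimV = dimW, a full-rank square matrix gives an isomorphism:
# x -> A @ x is both mono and epi, since A is invertible.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # det = 1, so A is invertible
assert abs(np.linalg.det(A) - 1.0) < 1e-12

Ainv = np.linalg.inv(A)
v = np.array([3.0, -4.0])
# Inverting recovers v exactly, so nothing collapses (mono)
# and every w = A @ x is reachable (epi):
assert np.allclose(Ainv @ (A @ v), v)
```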

And this is actually it for today and I hope you enjoyed it!

Next time in Linear Algebra we will continue with linear functions, getting into the matrix of a linear function, some special cases, and maybe some more examples.

Bye!
