title
string
question_body
string
answer_body
string
tags
string
accepted
int64
Finding the Supremum of $E(y_t)$?
Suppose we have $$y_t = a + {\alpha}y_{t-1}+u_t$$ for $t>k$, where $k$ is a positive integer and $\alpha \in (0,1)$, and $$y_t = b + {\alpha}y_{t-1}+u_t$$ for $t\leq k$. Assume that $a$ and $b$ are two different real constants. The $u_t$ are iid with mean $0$ and variance $\sigma$, constant and finite; $t \in \mathbb{Z}$. Now, I am trying to obtain the moving-average representation of infinite order for this process. Normally, $\alpha$ being in the interval $(0,1)$ implies that $\sup_t E(y_t)$ is finite. Thus, the space of sequences of $y_t$s is a Banach space and we proceed with the geometric series sum. In this case, do we still have to show that $\sup_t E(y_t)$ is finite? Or does $\alpha$ being in the interval $(0,1)$ imply this? Thank you in advance.
Pick any $t\in\mathbb{Z}$. Then it holds that $$E[y_t]\le \max\{|a|,|b|\}+\alpha E[y_{t-1}].$$ Supposing there is an initial value $l\in \mathbb{Z}$ such that $|E[y_l]|=C$, you find $$E[y_t]\le \max\{|a|,|b|\}\sum_{k=0}^{t+|l|-1}\alpha^k+ |E[y_{l}]| \sum_{k=0}^{t+|l|}\alpha^k \le \frac{1}{1-\alpha}\bigl(C+\max\{|a|,|b|\}\bigr).$$ The constant does not depend on $t\in\mathbb{Z}$, and therefore $$\sup_{t\in\mathbb{Z}}E[y_t]<\infty.$$
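A minimal numerical sketch in Python of the boundedness claim (the values $a=1$, $b=-2$, $\alpha=0.7$, the switch at $k=0$, and $C=5$ are arbitrary demo choices, not from the question): iterate the mean recursion and check it never exceeds the bound above.

a, b, alpha = 1.0, -2.0, 0.7
C = 5.0                           # |E[y_l]| at the starting index (assumption for the demo)
bound = (C + max(abs(a), abs(b))) / (1 - alpha)

m = C
for t in range(-50, 51):          # t plays the role of t in Z, regime switch at k = 0
    const = a if t > 0 else b
    m = const + alpha * m         # mean recursion E[y_t] = const + alpha * E[y_{t-1}]
    assert abs(m) <= bound, (t, m)

print("bound:", bound, "final mean:", m)   # the means settle near a/(1-alpha)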
|expected-value|
0
Calculating $\int\frac{1}{(x^2+2x+3)\sqrt{x}}\,dx$
My attempt: Factor the denominator: the denominator can be factored as $(x + 1)(x + 3)$. Substitution: let's substitute $u = x + 1$. Notice that $du = dx$. Rewrite the integral: substituting $x$ and $dx$ in terms of $u$, $\displaystyle\int\dfrac{1}{(x+1)(x+3)\sqrt{x}}~dx$ becomes $\displaystyle\int\dfrac{1}{u(u+2)\sqrt{u-1}}~du$. I could not go further. Please help me.
Your factorization of the quadratic in the denominator is incorrect: in fact, the quadratic is irreducible over $\Bbb R$, since its discriminant is negative: $2^2 - 4 \cdot 1 \cdot 3 = -8$. Hint: Changing variables via $u = \sqrt x$ (so $x = u^2$ and $dx = 2u\,du$) rationalizes the integral, so that it can be handled in the usual way: $$2\int \frac{du}{u^4 + 2 u^2 + 3} .$$ Now see this problem, which is identical except for the constants.
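A quick numeric cross-check of the substitution (the interval $[1,4]$ is an arbitrary choice), using plain midpoint sums so no extra libraries are needed:

# With u = sqrt(x) and dx = 2u du, the integral over x in [1, 4]
# should equal the u-integral over [1, 2].
def midpoint(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

lhs = midpoint(lambda x: 1.0 / ((x * x + 2 * x + 3) * x ** 0.5), 1.0, 4.0)
rhs = midpoint(lambda u: 2.0 / (u ** 4 + 2 * u ** 2 + 3), 1.0, 2.0)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6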
|calculus|integration|indefinite-integrals|
0
Are $O(n)$ and $SO(n)\times Z_2$ homeomorphic as topological spaces?
I'm working through Problem 4.16 in Armstrong's Basic Topology, which has the following questions: Prove that $O(n)$ is homeomorphic to $SO(n) \times Z_2$. Are these two isomorphic as topological groups? Some preliminaries: Let $\mathbb{M_n}$ denote the set of $n\times n$ matrices with real entries. We identify each matrix $A=(a_{ij}) \in \mathbb{M_n}$ with the corresponding point $(a_{11},a_{12},\ldots,a_{1n},a_{21},a_{22},\ldots,a_{2n},\ldots,a_{n1},a_{n2},\ldots,a_{nn}) \in \mathbb{E}^{n^2}$, thus giving $\mathbb{M_n}$ the subspace topology. The orthogonal group $O(n)$ denotes the group of orthogonal $n \times n$ matrices $A \in \mathbb{M_n}$, which necessarily have $\det(A)=\pm 1$. The special orthogonal group $SO(n)$ denotes the subgroup of $O(n)$ with $\det(A)=1$. $Z_2=\{-1, 1\}$ denotes the multiplicative group of order 2. My attempt: For odd $n$, the answer to both questions is yes, as we verify below. Consider the mapping $f:O(n)\to SO(n)\times Z_2$, $A \mapsto(\det(A)\cdot A, \det(A))$.
Let $$ \mathrm{O}(n) = \{A \in \mathrm{Mat}_n(\mathbb R): A^\intercal A = I_n\}, \quad C_2 = \{\pm 1\} $$ be, respectively, the orthogonal group and the group with two elements. Since $\det\colon \mathrm{Mat}_n(\mathbb R)\to \mathbb R$ is multiplicative and $\det(A^\intercal) = \det(A)$, if $A \in \mathrm{O}(n)$ then $1=\det(A^\intercal)\det(A) = \det(A)^2$, so that $\det(A)\in C_2$. Moreover, if $s\colon C_2\to \mathrm{O}(n)$ is given by $s(\epsilon) = \mathrm{diag}(\epsilon,1,\ldots,1)$, then $s$ is a group homomorphism and $\det(s(\epsilon))=\epsilon$. Since by definition $\mathrm{SO}(n):=\det_{|\mathrm{O}(n)}^{-1}(\{1\})$, it follows that $\mathrm{O}(n)= \mathrm{SO}(n)\cdot H$ where $H = s(C_2)$. An internal product like this can also be described as a semi-direct product. In this case, as a set the semidirect product is $\mathrm{SO}(n)\times C_2$, but the group operation is not simply componentwise; rather, it is twisted by an action of $C_2$ by automorphisms on $\mathrm{SO}(n)$.
|abstract-algebra|general-topology|algebraic-topology|topological-groups|
0
Stochastic patterns in permutations
I'm a retired computer engineer and I'm studying a bit of abstract algebra these days. I have done some research on the permutation groups of puzzles. I've had a copy of the full partition of the 2x2x2 Rubik's cube for a while, and I think I can say it has an interesting structure, this partition. It's embedded in $Z^2$, but really it has a 3-dimensional structure: you can roll up a section as a cylinder, so each section is a torus, and all the tori are nested. I figured out how to use the metric the authors used to count all the permutations to embed the flat tori as a nested structure of open 'boxes' in $Z^3$. I can demonstrate how this is all done with (what I think I've learned about) restriction and induction. I think the puzzles, along with a partition or part of one, are a good learning tool: lots of ways to look at different things, apart from a color map. But about the stochastic thing: this is about counting those permutations of a puzzle which form a pattern of some kind.
Are your "imbricated torii" more or less in connection with the following kind of "flattened representation" of the Rubik's cube ?
|real-analysis|
0
Confusion over the representation of a curl of a vector field
I'm learning vector calculus and I'm still confused over what the curl represents for a vector field. It is stated that the curl represents the magnitude of rotation of surrounding vectors about a given point. But which direction does it point? Let's say we use the field $\vec{F}(x,y,z)=\langle-y,x,0\rangle$, which gives the velocity vector of a particle moving on the path $x^2+y^2=r^2$ with speed $r$, counter-clockwise in the x-y plane (with the plane translated upwards for any $z$). Intuitively, by the argument of rotation, we have circular rotation about any point on the z-axis with the same axis of rotation, while for any other point there doesn't appear to be the same circular behavior surrounding that point, so I would expect the curl to return a different quantity. Yet we obtain $$\nabla\times\vec{F}=\langle0,0,2\rangle,$$ a constant vector, at every point. If the behavior of the vector field around a point is different for points on the z-axis and any other point, why does the curl yield the same value everywhere?
If you put an object in this field (just for simplicity, suppose it's a thin bar), then it would experience rotation at the same rate, no matter where it is placed. It's this local rotation-causing torque we're measuring with the curl. The field may also cause the object to travel in a circle, but that isn't itself what the curl measures, even though you can use an integral to connect the two phenomena.
|vector-analysis|curl|
0
Prove that the equation $ax^2+by^2 \equiv c$ (mod $p$) has integer solutions.
Let $p$ be a prime number and $a,b,c$ integers such that $a$ and $b$ are not divisible by $p$. Prove that the equation $ax^2+by^2 \equiv c \pmod{p}$ has integer solutions. I am trying to come up with a solution using the Pigeonhole Principle, but I have limited knowledge of number theory, so I am having a hard time coming up with an idea to start with.
How limited is "limited"? And why do you write the post in a command form ("Prove that...")? It is not how you would normally ask questions in person, surely; that style of writing here is an obvious indication that this is homework. Hint: Treat $p=2$ first; then let $p > 2$, rewrite the congruence as $ax^2 \equiv -by^2 + c \bmod p$, and count the number of values taken by both sides (your coefficients are fixed). How many squares mod $p$ are there (including $0$)?
|elementary-number-theory|
0
Two ways to write a general solution of a system of linear equations.
When we solve a system of linear equations in $n$ variables by Gauss elimination, there are two ways to write the general solution: as one $n$-tuple depending on the free variables, or as a linear combination of specific vectors, with the free variables as coefficients, to which a fixed vector is added. For example: $(2z+3t+4, 5z+6t+7, z+8, t+9)$ versus $z(2,5,1,0)+t(3,6,0,1)+(4,7,8,9)$. Are there any standard names for these two ways of writing the solution?
For the second one you already mention "linear combination" in the description but that's what I thought it was actually called. Not sure if the first one has any special name except maybe "solution vector" or something.
|linear-algebra|systems-of-equations|
0
Is this 1st or 2nd order logic?
I'm reading "Set Theory and the Continuum Problem" by Smullyan & Fitting, and on page 16 it says: Thus, we allow $\forall x$ for $x$ a set variable, but not $\forall A$ for $A$ a class variable. But right below that it states: $P_2$ Separation $$(\forall A_1)\ldots(\forall A_n)(\exists B)(\forall c)[x \in B \iff \phi(A_1,\ldots,A_n)]$$ Intuitively, each axiom $P_2$ says that given any subclasses $A_1,\ldots,A_n$ of $V$ there... So I'm confused. Why isn't this quantifying over class variables?
The first quote is regarding the definition of a first-order property, not a universal prohibition. The sentences of the axiom schema are not first-order sentences in this sense, but the formulas $\varphi$ that the schema ranges over are all the first-order formulas. So in other words, what they mean is that for each formula of the form $\varphi(A_1,\ldots, A_n, x)$ that does not include any class quantifiers, $$ \forall A_1\ldots \forall A_n\exists B\forall x(x\in B\iff \varphi(A_1,\ldots, A_n,x))$$ is an axiom. (Note that they should have also stipulated that $\varphi$ does not contain the variable $B$.) On the general question of "is this first or second-order logic?": even though we talk about class variables as "2nd-order variables", since the classes over which they range are informally collections of sets, NBG is a first-order theory. One can either consider it as a two-sorted first-order theory, or as a one-sorted theory where sets are a special type of class (I can't tell which one the authors intend here).
|set-theory|first-order-logic|
1
The operation $ (a,b)(c,d)=(ac-bd,ad+bc) $ on $\Bbb R\times\Bbb R\backslash (0,0)$ yields a group
Here is the binary operation $*$ on $\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}$ defined by $(a,b)(c,d)=(ac-bd,ad+bc)$. My idea is that to show $(\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}, *)$ is a group, I need to show that $*$ is well-defined and associative and then show it has an identity and inverses. I am struggling to do the first part. How do I show $*$ is well-defined (and is the first part required)? Is showing that $ac-bd=0,\ ad+bc=0$ will only be true if $a=b=c=d=0$ sufficient?
How do I show $*$ is well-defined (and is the first part required)? This is indeed required, but it is almost always quick, because it is usually obvious very fast whether an operation is well-defined. In general, for a function $f : A \to B$ to be well-defined, these must hold: (1) for each value $a \in A$, the value $f(a)$ must be defined; (2) moreover, that value must lie in $B$ (i.e. $f(a) \in B$ for all $a \in A$); (3) moreover, that value must be unique, i.e. you cannot send $a \in A$ to two distinct values: if $b,b' \in B$ are such that $b=f(a)$ and $b'=f(a)$, then $b=b'$. So, for instance, some examples: consider the mapping $$\begin{align*} f : \mathbb{R} &\to \mathbb{R} \\ x &\mapsto f(x) := \frac 1 x \end{align*}$$ This violates criterion (1) above: $f(0)=1/0$ is not defined. Consider the mapping $$\begin{align*} f : \mathbb{R} &\to \mathbb{R} \\ x &\mapsto f(x) := \sqrt{x} \end{align*}$$ This violates criterion (1) above: $\sqrt{-1}$ is not defined. Up to certain preferences, you could also argue that it touches criterion (3), since every positive real number has two square roots.
|abstract-algebra|group-theory|functions|binary-operations|
0
Finding Maximum Value using AM-GM Inequality
Let us have a collection of nonnegative real numbers $S=\{x_1,x_2,\ldots,x_n\}$ where $n\geq4$ and $n$ is even, such that $\sum_{i=1}^nx_i=1$. Find the maximum value of $\sum_{i=1}^{n-1}x_i x_{i+1}$. My approach to the problem: we have $x_1,x_2,\ldots,x_n\geq0$. Since $x_1+x_2+\ldots+x_n=1$, any $x_i>1$ is not possible. Further, if even one $x_i=1$, then all the others have to be zero. Therefore $0\leq x_1,x_2,\ldots,x_n\leq1$. Using AM $\geq$ GM, we have $\frac{x_1^2+x_2^2}{2}\geq x_1x_2$ with equality $\iff x_1=x_2$. Now, to bound $\sum_{i=1}^{n-1}x_i x_{i+1}$, we add the respective inequalities: $$\frac{x_1^2+2(x_2^2+x_3^2+\ldots+x_{n-1}^2)+x_n^2}{2}\geq x_1x_2+x_2x_3+\ldots+x_{n-1}x_n,$$ with equality holding if and only if $x_1=x_2=\ldots=x_n$. It gives $RHS_{max}=\frac{n-1}{n^2}$. Kindly correct me if there is a mistake. Thank you.
Here is a way to use AM-GM to find this maximum: $$\sum_{i=1}^{n-1}x_ix_{i+1} \leqslant(x_1+x_3+...)(x_2+x_4+...)\leqslant \frac{(x_1+x_2+x_3+x_4+\cdots)^2}4=\frac14$$ Equality is possible when any two adjacent $x_i, x_{i+1}$ are $\frac12$ and the rest $0$ , so this gives the maximum.
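A small randomized check in Python ($n = 6$ and the sample count are arbitrary choices) that no point of the simplex beats the value $\frac14$ attained by $(\frac12,\frac12,0,\ldots,0)$:

import random

def objective(x):
    return sum(x[i] * x[i + 1] for i in range(len(x) - 1))

n = 6
best = objective([0.5, 0.5] + [0.0] * (n - 2))   # = 1/4, the claimed maximum
for _ in range(100_000):
    x = [random.random() for _ in range(n)]
    s = sum(x)
    x = [v / s for v in x]                        # normalize onto the simplex
    assert objective(x) <= best + 1e-12
print("maximum from the closed form:", best)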
|real-analysis|algebra-precalculus|inequality|real-numbers|a.m.-g.m.-inequality|
0
The operation $ (a,b)(c,d)=(ac-bd,ad+bc) $ on $\Bbb R\times\Bbb R\backslash (0,0)$ yields a group
Here is the binary operation $*$ on $\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}$ defined by $(a,b)(c,d)=(ac-bd,ad+bc)$. My idea is that to show $(\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}, *)$ is a group, I need to show that $*$ is well-defined and associative and then show it has an identity and inverses. I am struggling to do the first part. How do I show $*$ is well-defined (and is the first part required)? Is showing that $ac-bd=0,\ ad+bc=0$ will only be true if $a=b=c=d=0$ sufficient?
Hint $$(a^2 + b^2)(c^2 + d^2) = (a c - b d)^2 + (a d + b c)^2 .$$
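For completeness, a one-line symbolic verification of the hinted two-square identity with sympy:

from sympy import symbols, expand

a, b, c, d = symbols('a b c d', real=True)
lhs = (a**2 + b**2) * (c**2 + d**2)
rhs = (a*c - b*d)**2 + (a*d + b*c)**2
assert expand(lhs - rhs) == 0   # the identity holds for all real a, b, c, d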
|abstract-algebra|group-theory|functions|binary-operations|
0
Question related to canonical construction of Brownian Motion
I have some confusion from going through the canonical construction of 1-dimensional Brownian motion. Here we take $\Omega:=C(\mathbb{R}_+,\mathbb{R})$, we equip $\Omega$ with the smallest sigma algebra $\mathcal{C}$ such that all coordinate mappings are measurable, and we let $\mathbb{P}$ be the Wiener measure. Question 1: by the definition of a sigma algebra, we should have $\Omega\in\mathcal{C}$; however, from this link: Formally show that the set of continuous functions is not measurable, I doubt this is the case (i.e. $\Omega\not\in\mathcal{C}$). Comment: I think $\mathcal{C}$ here is the "product $\sigma$-algebra" in the above link, or please correct me if I am wrong. Question 2: In the construction, we then set, for every $t$, $B_{t}(\omega)=\omega(t)$ for all $\omega\in\Omega$. However, here $\omega\in\Omega$ is just a random continuous function, which doesn't necessarily have the Brownian properties, e.g. nowhere differentiable, recurrent at level $0$, etc. Why can we still make such a construction?
Yes, $\Omega$ is the set of continuous functions from $[0,\infty)$ to $\mathbb{R}$. The fact that $\Omega$ is not a measurable set of some other $\sigma$-algebra is not at all relevant. It's true that there are some elements $\omega \in \Omega$ that don't satisfy properties that Brownian motion has almost surely, such as being non-differentiable. However, those elements have probability $0$ under the Wiener measure. It's the same way we can define a uniform random variable on $\Omega = [0,1]$ with Lebesgue measure by $X(\omega) = \omega$, even though some $\omega$s are rational and a uniform random variable is almost surely irrational.
|probability-theory|measure-theory|brownian-motion|wiener-measure|
1
centralizer of the tensor product of von Neumann algebra
Let $M$ and $N$ be two von Neumann algebras. Suppose $\omega_1$ and $\omega_2$ are two normal states of $M$ and $N$ respectively. We consider the tensor product von Neumann algebra $M\otimes N$ . If we assume that $(M\otimes N)_{\omega_1\otimes \omega_2}=M_{\omega_1}\otimes N_{\omega_2}$ , where $(M\otimes N)_{\omega_1\otimes \omega_2}$ is the centralizer of $M\otimes N$ . Can we have the following equality: $$(M\otimes N)_{\omega_1\otimes \omega_2}' \cap (M\otimes N) =(M_{\omega_1}'\otimes N_{\omega_2}')\cap (M\otimes N) = (M_{\omega_1}'\cap M) \otimes (N_{\omega_2}'\cap N).$$
It is generally true that, when $A \subset M$ and $B \subset N$ are von Neumann subalgebras, then $(A \otimes B)' \cap (M \otimes N) = (A' \otimes B') \cap (M \otimes N) = (A' \cap M) \otimes (B' \cap N)$.
|operator-algebras|von-neumann-algebras|
1
Is there a differentiable $f$ st $f' \ne 0$ st for $a\not \in \mathbb{Q}, \ f(a) \in \mathbb{Q}$?
There are many easy examples of differentiable $f$ such that $f'(x) \ne 0$ and, for $a \in \mathbb{Q}$, $f(a) \not\in \mathbb{Q}$, for example $\pi x$, $e^x$, etc. But the question is: is the converse true? I.e., is there a differentiable $f$ with $f'\ne 0$ on $\mathbb{R}$ such that for $a\not\in \mathbb{Q}$, $f(a) \in \mathbb{Q}$? Of course $f$ couldn't be a 1-1 function, but can such an $f$ exist? I was not able to find any example. I think the answer is that such a function couldn't exist, because the cardinality of the irrationals is bigger than that of the rationals, but I couldn't prove it.
Claim. If $f: \mathbb R\to \mathbb R$ is continuous and sends every irrational number to a rational number, then $f$ is constant. Proof. Suppose not. Then there exist $x\ne y$ such that $a=f(x)\ne b=f(y)$. After relabelling, we can assume that $a<b$. The interval $[a,b]$ has the cardinality of the continuum. Since $\mathbb Q$ is countable, so is $f(\mathbb Q)$. Thus, $f(\mathbb Q)\cup f(\mathbb R \setminus \mathbb Q)$ is countable (since $f(\mathbb R \setminus \mathbb Q)\subset \mathbb Q$). Hence, $f(\mathbb R)$ cannot contain $[a,b]$. Thus, there exists $c\in [a,b]$ such that $c\notin f(\mathbb R)$. This contradicts the intermediate value theorem. qed
|real-analysis|elementary-set-theory|examples-counterexamples|
1
Proof verification: Let $A, C \subseteq X$. Prove that $C - (C \cap A) = C \cap (X-A)$.
Needing this statement for a topology problem - looking for proof verification here. I am always a bit uneasy when using the addition rule. For the forward direction, suppose $x \in C - (C\cap A)$ . Then $x \in C$ and either $x \notin C$ or $x \notin A$ . Since $x \in C$ we conclude $x \notin A$ . Since $A \subseteq X$ we conclude $x \in C \cap (X - A)$ . Conversely, let $x \in C \cap (X - A)$ . Then $x \in C$ and $x \in X$ and $x \notin A$ . Via addition, we may conclude $x \in C$ and, additionally, $x \notin C$ or $x \notin A$ , which implies $x \in C - (C \cap A)$ . QED
$$ \begin{aligned} C-(C\cap A)&=C\cap(X-(C\cap A))\\ &=C\cap((X-C)\cup (X-A))\\ &=(C\cap (X-C))\cup(C\cap (X-A))\\ &=\varnothing\cup(C\cap (X-A))\\ &=C\cap (X-A) \end{aligned}$$
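A randomized sanity check of the identity with Python sets (the universe size 20 and sampling scheme are arbitrary):

import random

X = set(range(20))
for _ in range(1000):
    A = {x for x in X if random.random() < 0.5}
    C = {x for x in X if random.random() < 0.5}
    # C - (C ∩ A) should equal C ∩ (X - A)
    assert C - (C & A) == C & (X - A)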
|elementary-set-theory|solution-verification|
1
Why does the equation $x = 1$ represent a line in a 2-dimensional coordinate system?
I'm posting a question because I was curious about something while studying linear algebra. As we all know, $x = 1$ is a point in a one-dimensional coordinate system. I understand this part. But why does $x = 1$ represent a line in two dimensions? Is it simply defined that way? If anyone knows anything about this, please help. It may seem trivial, but I feel really uncomfortable and think about it every day. Thank you.
The graph is all the points $(x,y)$ such that $x=1$. There is no constraint on $y$, so we plot all the points that have $1$ as the first coordinate. This results in a vertical line.
|coordinate-systems|
0
Options for arranging balls in a line
We have an unlimited supply of balls in $8$ colors. Balls of the same color are identical. How many ways are there to arrange them in a line of $10$ balls, so that the line includes exactly $4$ different colors?
Let the $10$ balls (numbered $1$ to $10$ according to their position) be placed in a row. The $4$ colours (out of $8$) for these balls can be selected in $C(8,4) = 70$ ways. Now, in order to decide the colour of each numbered ball, we have to distribute the $10$ balls among the $4$ selected colours, such that no colour gets $0$ balls. Using the Principle of Inclusion & Exclusion, this can be done in $$4^{10} - C(4,1)\times 3^{10} + C(4,2)\times 2^{10} - C(4,3)\times 1^{10} = 818{,}520$$ ways (for each selection of $4$ colours). So, the final answer is $70 \times 818{,}520 = 57{,}296{,}400$.
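A brute-force confirmation in Python of both the inclusion-exclusion count and the final answer ($4^{10}\approx 10^6$ cases, so direct enumeration is feasible):

from itertools import product
from math import comb

# count length-10 sequences over 4 fixed colours that use every colour
surjective = sum(1 for seq in product(range(4), repeat=10) if len(set(seq)) == 4)
print(surjective)               # 818520
print(comb(8, 4) * surjective)  # 57296400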
|combinatorics|
0
exponential graphing with intercept
I'm not sure what to do exactly. I got the equation of the line to be $y=\dfrac{p}{\ln\left(p\right)}x+p$ is this correct? Now I need to solve for x using the above.
First, you need to find the $x$ -intercept of $f(x)$ . To do that, you would need to set the function equal to zero, then solve for $x$ . The setup would look like this: $0=-e^{-x}+p\\ p=e^{-x}\\ \ln(p)=-x\\ x=-\ln(p)$ x-intercept = $(-\ln(p),0)$ To get the $y$ -intercept, you need to plug in zero for the $x$ -value, then solve for $y$ . The setup would look like this: $y=-e^{0}+p\\ y=p-1$ y-intercept = $(0,p-1)$ Now that we have the coordinate points of the $x$ and $y$ -intercepts, you need to figure out the equation for a straight line that would pass through both of those points. Since you already know what the $y$ -intercept should be, all you need is the slope of the line, which is $\frac{\text{rise}}{\text{run}}$ : slope = $\frac{p-1}{\ln(p)}$ Final equation: $y=\frac{p-1}{\ln(p)}x+(p-1)$
|graphing-functions|
0
Quadratic form with an absolute lower bound on integer vectors: Conditions for semidefinite matrices
Assume that $X\in \mathrm{PSD}(n)$ is a symmetric, positive semi-definite matrix with real entries and $C>0$ is a positive real number. If $X$ satisfies $$ \forall \alpha\in\mathbb{Z}^n\setminus\{0\},\qquad \alpha^T X \alpha \geq C, $$ then is it true that $X$ is positive definite? That is, if $X$ is bounded below on integer vectors, is it bounded below on all of $\mathbb{R}^n$? Can we give a concrete lower bound for $\lambda_{\min}(X)$ in terms of $C,n$ and $\lambda_{\max}(X)$? The way I understand it, the problem is closely related to approximations of real vectors by rational ones: the simultaneous version of Dirichlet's approximation theorem implies that if $v\in\mathbb{R}^n$ is a real vector and $N>0$ is a natural number, then there exists a rational vector $\beta\in\mathbb{Q}^n$ such that $$ \beta=\Big(\,\frac{p_1}{q},\, \frac{p_2}{q},\,\dots,\,\frac{p_n}{q}\,\Big)\; \text{ with }\;1\leq q\leq N\qquad\text{and}\qquad \Vert v-\beta\Vert_\infty < \frac{1}{qN^{1/n}}. $$ Note that $\beta^T X \beta \geq C/q^2$ whenever $\beta\neq 0$, since $q\beta\in\mathbb{Z}^n\setminus\{0\}$.
Suppose for a contradiction that the matrix $X$ is not positive definite. As far as I remember, there then exist a natural number $m<n$ and linear maps $f_1,\dots,f_m:\mathbb R^n\to\mathbb R$ such that $v^TXv=\sum_{i=1}^m f_i(v)^2$ for each $v\in\mathbb R^n$. Put $$M=\sup \{|f_i(v)|:v\in\mathbb R^n,\,\|v\|\le 1,\,i\in\{1,\dots,m\}\}.$$ Define a map $f:\mathbb R^n\to\mathbb R^m$ such that for each $v\in\mathbb R^n$ and each natural $i\le m$, the $i$th component of the vector $f(v)$ is $f_i(v)$. Pick any positive $c<C$ and put $K_m=\left[0,\sqrt{\frac{c}{m}}\right]^m$. For any natural number $N$ put $I_N=\{-N,-N+1,\dots,N-1,N\}^n\subset\mathbb R^n$. Then $f(I_N)$ is contained in the cube $J_N=[-MN,MN]^m\subset \mathbb R^m$. Put $L=\left\lceil 2N\sqrt{\frac{m}{c}}\right\rceil$. Then there exists a subset $A_N$ of $\mathbb R^m$ such that $|A_N|\le (NL)^m$ and $J_N\subset A_N+K_m$. So if $(2N+1)^n>(LN)^m$ then by the pigeonhole principle there exist $a\in A_N$ and distinct $\alpha,\alpha'\in I_N$ such that both $f(\alpha)$ and $f(\alpha')$ lie in $a+K_m$. Then, since $f$ is linear, $$(\alpha-\alpha')^TX(\alpha-\alpha')=\|f(\alpha)-f(\alpha')\|^2\le m\cdot\frac{c}{m}=c<C,$$ while $\alpha-\alpha'\in\mathbb{Z}^n\setminus\{0\}$, a contradiction.
|linear-algebra|number-theory|quadratic-forms|
0
The standard error of the mean
I came across this formula for the standard error of the mean in a book and wondered how I might be misunderstanding it: $$\widehat{SE} = \sqrt{\sum_{i=1}^n \frac{(x_i-\bar{x})^2}{n(n-1)}}$$ I would have thought it should be $$\widehat{SE} = \frac{\sigma_x}{\sqrt{n}},$$ where $\sigma_x$ is the standard deviation of the sample: $$\sigma_x = \sqrt{\sum_{i=1}^n \frac{(x_i-\bar{x})^2}{n}}$$ Could it be that the first formula refers to the variance of the mean rather than the standard error? Is the standard error the same as the standard deviation?
The first formula $$\widehat{SE} = \sqrt{\sum_{i=1}^n \frac{(x_i - \bar x)^2}{n(n-1)}}$$ is a point estimator of the standard error of the sample mean. It is a statistic, and it estimates the standard error. The true standard error of the sample mean is $$SE = \frac{\sigma}{\sqrt{n}},$$ where $\sigma$ is the population standard deviation, and $n$ is the sample size. This is because for independent and identically distributed $X_1, \ldots, X_n$ with common variance $\sigma^2$ , $$\operatorname{Var}[\bar X] = \operatorname{Var}\left[\frac{1}{n} \sum_{i=1}^n X_i\right] \overset{\text{iid}}{=} \frac{1}{n^2} \sum_{i=1}^n \operatorname{Var}[X_i] = \frac{1}{n^2} \cdot n \sigma^2 = \frac{\sigma^2}{n}.$$ Thus the standard deviation of the sampling distribution of the sample mean--i.e., the standard error--is $$SE = \sqrt{\operatorname{Var}[\bar X]} = \frac{\sigma}{\sqrt{n}}.$$ But $\sigma$ is a parameter. If it is unknown, then we can only estimate the standard error through observing a sample.
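A short simulation sketch (normal data with $\sigma = 2$, $n = 50$, and the repetition count are arbitrary demo choices) showing the estimator tracking the true standard error:

import math, random, statistics

sigma, n, reps = 2.0, 50, 2000
se_true = sigma / math.sqrt(n)

se_hats, means = [], []
for _ in range(reps):
    x = [random.gauss(0, sigma) for _ in range(n)]
    # statistics.stdev divides by n-1, so this matches the book's SE-hat formula
    se_hats.append(statistics.stdev(x) / math.sqrt(n))
    means.append(statistics.fmean(x))

print("true SE:", se_true)
print("average SE-hat:", statistics.fmean(se_hats))
print("empirical SD of the sample means:", statistics.stdev(means))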
|mean-square-error|
1
exponential graphing with intercept
I'm not sure what to do exactly. I got the equation of the line to be $y=\dfrac{p}{\ln\left(p\right)}x+p$ is this correct? Now I need to solve for x using the above.
Find $z$ such that $f(z)=0$ : $$-e^{-z}+p=0 \Leftrightarrow -e^{-z}=-p \Leftrightarrow e^{-z}=p \Leftrightarrow -z = \ln(p) \Leftrightarrow z= -\ln(p).$$ Compute $f(0)$ : $$f(0)=-e^{-0}+p=-1+p=p-1.$$ Your linear equation needs to go through the points $$ \begin{aligned} (x_1,y_1)&=(z,0)=(-\ln(p),0)\\ (x_2,y_2)&=(0,f(0))=(0,p-1) \end{aligned}$$ Therefore your linear equation $y(x)=mx+c$ needs to satisfy $$ \begin{aligned} y(z)&= 0\\ y(0)&= p-1 \end{aligned}$$ You find $c$ by calculating $$ \begin{aligned} p-1&=y(0)= m\cdot 0 + c = c\\ \end{aligned}$$ After that you can find $m$ by calculating $$ \begin{aligned} 0=y(z)= m \cdot z + c = m \cdot (-\ln(p)) + p-1 &\Leftrightarrow -(p-1) = m \cdot (-\ln(p))\\ & \Leftrightarrow \frac{p-1}{\ln(p)} = m \end{aligned} $$ You end up with $$c=p-1 \quad \text{and}\quad m=\frac{p-1}{\ln(p)}.$$ You are searching for the equation $$y(x)=\frac{p-1}{\ln(p)}x+(p-1).$$
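A quick numeric check that this line really passes through both intercepts ($p = 3$ is an arbitrary sample value):

import math

p = 3.0
m, c = (p - 1) / math.log(p), p - 1
y = lambda x: m * x + c
assert abs(y(-math.log(p))) < 1e-12       # x-intercept (-ln p, 0)
assert abs(y(0) - (p - 1)) < 1e-12        # y-intercept (0, p-1)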
|graphing-functions|
1
Classification of automorphy factors for $\mathrm{SL}_2(\mathbb{R})$ on the upper half-plane
By an automorphy factor (or a factor of automorphy) for $\mathrm{SL}_2(\mathbb{R})$ on the upper half-plane $\mathbb{H}$, I mean a continuous map $$j \colon \mathrm{SL}_2(\mathbb{R}) \times \mathbb{H} \to \mathbb{C}^\times$$ such that the following conditions are satisfied: (i) For each $g \in \mathrm{SL}_2(\mathbb{R})$, the resulting map $\mathbb{H} \to \mathbb{C}$, $z \mapsto j(g,z)$, is smooth. (ii) $j$ is a 1-cocycle in the sense that $$j(g_1 g_2, z) = j(g_1, g_2 z) \cdot j(g_2, z)$$ for all $g_1, g_2 \in \mathrm{SL}_2(\mathbb{R})$ and $z \in \mathbb{H}$, where $\mathrm{SL}_2(\mathbb{R})$ acts on $\mathbb{H}$ by Möbius transforms. As the title of my question indicates, I would like to classify all such automorphy factors. Of course, there is a trivial way to obtain automorphy factors (so-called 1-coboundaries), which needs to be excluded: for any smooth function $f \colon \mathbb{H} \to \mathbb{C}^\times$, we obtain the automorphy factor $j_f(g,z) := f(gz)/f(z)$. A non-trivial example is $j(g,z) = cz + d$ for $g = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
I have just seen this question a couple of days ago. I believe the answer to your question is yes. First note that if we take $$ h(z) = \sqrt{\mathrm{Im}(z)}, $$ then $$ j_h(g,z)\,j(g,z)=\frac{1}{cz+d}, $$ so we can assume that $j(g,z)=(cz+d)^{-1}$ (this is a little easier to manipulate). Now, to each automorphy factor $f:\mathrm{SL}_2(\mathbb{R})\times \mathbb{H}\to \mathbb{C}^\times$ we associate an $\mathrm{SL}_2(\mathbb{R})$-equivariant complex line bundle on $\mathbb{H}$ as follows: as a manifold, $$ L_f = \mathbb{H}\times \mathbb{C} $$ with the obvious projection onto $\mathbb{H}$. The action of $\mathrm{SL}_2(\mathbb{R})$ on $L_f$ is given by the formula $$ g\cdot (z,w) = (gz,f(g,z)w). $$ It is easy to see that if $f_1$ and $f_2$ are automorphy factors such that $L_{f_1}$ is isomorphic (as equivariant line bundles) to $L_{f_2}$, then $f_1$ and $f_2$ are equal up to coboundary. Also, it is easy to see that for every pair $f_1,f_2$ of automorphy factors, $$ L_{f_1}\otimes L_{f_2} \cong L_{f_1 f_2}. $$
|reference-request|modular-forms|automorphic-forms|
0
Conjecture: The sequence $\frac{2}{n} \sum_{i=1}^n \sqrt{\frac{n}{i-\frac{1}{2}}-1}$ converges to $\pi$
I found that the sequence $$s(n) = \frac{2}{n} \cdot \sum_{i=1}^n \sqrt{\frac{n}{i-\frac{1}{2}}-1}$$ converges to $\pi$ as $n \to \infty$. To verify this I have computed some values: $s(10^1) = 2.76098$, $s(10^3) = 3.10333$, $s(10^5) = 3.13776$, $s(10^6) = 3.14038$, $s(10^7) = 3.14121$. This seems to support the claim; however, it is no proof of the convergence. I would not know how to begin a proof of this limit, and I did not find any similar formula among known approximation formulas. Does anyone have an idea of how such a proof can be constructed?
Consider the sum $$\frac{1}{n}\sum_{k=1}^{n}f\left(\frac{2k-1}{2n}\right),$$ where $f$ is decreasing on $(0,1]$. If $\int_{0}^{1}f(x)\,{\rm d}x$ is convergent (as an improper or ordinary Riemann integral), then $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}f\left(\frac{2k-1}{2n}\right) =\int_{0}^{1}f(x)\,{\rm d}x.$$ Proof: For $1\leq k\leq n-1$, we have $$\frac{1}{n}f\left(\frac{2k+1}{2n}\right) \leq\int_{\frac{2k-1}{2n}}^{\frac{2k+1}{2n}}f(x)\,{\rm d}x \leq \frac{1}{n}f\left(\frac{2k-1}{2n}\right).$$ Moreover, $$\frac{1}{n}f\left(\frac{1}{2n}\right) \leq\int_{0}^{\frac{1}{2n}}f(x)\,{\rm d}x +\frac{1}{2n}f\left(\frac{1}{2n}\right),$$ $$\frac{1}{2n}f\left(\frac{2n-1}{2n}\right) +\int_{\frac{2n-1}{2n}}^{1}f(x)\,{\rm d}x \leq\frac{1}{n}f\left(\frac{2n-1}{2n}\right).$$ By the above inequalities, we get $$\frac{f\left(\frac{2n-1}{2n}\right)}{2n}+\int_{\frac{1}{2n}}^1f(x)\,{\rm d}x \leq\frac{1}{n}\sum_{k=1}^{n}f\left(\frac{2k-1}{2n}\right) \leq\int_{0}^{\frac{2n-1}{2n}}f(x)\,{\rm d}x+\frac{f\left(\frac{1}{2n}\right)}{2n}.\tag{1}$$ Letting $n\to\infty$ in $(1)$, both integrals tend to $\int_{0}^{1}f(x)\,{\rm d}x$ and the boundary terms vanish (for the $f$ below, $f\left(\frac{1}{2n}\right)=\sqrt{2n-1}$, so $\frac{1}{2n}f\left(\frac{1}{2n}\right)\to0$), which proves the claim. Applying this with $f(x)=\sqrt{\frac{1}{x}-1}$, which is decreasing on $(0,1]$, gives $$\lim_{n\to\infty}s(n)=2\int_{0}^{1}\sqrt{\frac{1}{x}-1}\,{\rm d}x=2\cdot\frac{\pi}{2}=\pi.$$
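A numeric illustration of the midpoint-sum argument, reproducing the values in the question's table:

import math

def s(n):
    # s(n) = (2/n) * sum over i of sqrt(n/(i - 1/2) - 1), i.e. a midpoint sum of f(x) = sqrt(1/x - 1)
    return (2.0 / n) * sum(math.sqrt(n / (i - 0.5) - 1) for i in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, s(n))     # 2.7609..., 3.1033..., 3.1377..., matching the table
print(math.pi)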
|sequences-and-series|approximation|pi|
0
I have a doubt about a combinatorics question below and want to know where I am going wrong.
In how many different ways can 3 men and 4 women be placed into two groups of two people and one group of three people if there must be at least one man and one woman in each group? Note that identically sized groups are indistinguishable. Grp $1$ ($1M$ and $1W$), grp $2$ ($1M$ and $1W$), grp $3$ ($1M$ and $2W$). For grp 1 we have 3 choices of men and 4 choices of women ($3\times4$). For grp 2 we are left with 2 men and 3 women, so we have 2 choices of men and 3 choices of women ($2\times3$). And for grp 3 we are left with 1 man and 2 women, which is 1 whole group choice ($1$). So the equation becomes $3\times4\times2\times3\times1=72$. But the answer is half of it, $36$. I would appreciate your answer to make me understand where I am wrong.
Start with men. Make all $3$ of them stand separately; now each man represents one distinct group. First we have to divide the $4$ distinct women as $1+1+2$. This can be done in $\dfrac{4!}{(1!)^2(2!)(2!)} = 6$ ways. Now we have to distribute these parts among the $3$ distinct groups (of men), which can be done in $3!$ ways, giving $6 \times 3! = 36$. Hence the answer is $36$. PS: If you find any difficulty in understanding this answer, do read the topic Division and Distribution of Distinct Objects under Permutations & Combinations.
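A brute-force confirmation in Python, enumerating all partitions of the 7 labelled people into unordered groups of sizes $\{2,2,3\}$ and keeping those with a man and a woman in each group:

from itertools import combinations

men, women = {'M1', 'M2', 'M3'}, {'W1', 'W2', 'W3', 'W4'}
people = men | women

def mixed(group):
    g = set(group)
    return bool(g & men) and bool(g & women)

count = 0
for triple in combinations(sorted(people), 3):
    rest = sorted(people - set(triple))
    for pair1 in combinations(rest, 2):
        pair2 = tuple(x for x in rest if x not in pair1)
        if pair1 < pair2:                       # count each unordered {pair1, pair2} once
            if all(map(mixed, (triple, pair1, pair2))):
                count += 1
print(count)   # 36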
|probability|combinatorics|permutations|
0
Functional equation $x^n f(1/x) = f(x)$
I started playing with the functional equation $x^n f(1/x) = f(x)$ , with $n \in \mathbb N$ . I found two solutions: $g(x)=a x^{n/2}$ and $h(x)=b(1+x+...+x^n)=b S_n(x)$ . I also found that a linear combination of $g,h$ is also a solution. Any other solutions? More general ideas?
Clearly, $x\neq 0$ for the functional equation to even make sense. Let $P(x)=\frac{f(x)}{x^{n/2}}$. Dividing the functional equation by $x^{n/2}$, $$x^{n/2}f(1/x)=\frac{f(x)}{x^{n/2}},$$ i.e. $$P(1/x)=P(x).$$ Thus $P(1/x)=P(x)$ is necessary. Plugging $f(x)=x^{n/2}P(x)$ into the functional equation shows that this condition is also sufficient. We now have our answer: all functions $P$ for which $P(x)=P(1/x)$ yield a valid function $f$ via $f(x)=x^{n/2}P(x)$, as this is a necessary and sufficient condition. Notice that since $f$ is not specified to be continuous, $P$ could even be defined pointwise, subject only to this symmetry condition; there isn't much restriction. In the results you provided, the first has $P(x)=a$ and the second has $P(x)=b\sum_{i=-n/2}^{n/2}x^i$, both of which satisfy the given condition.
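A small symbolic check ($n = 4$ and the particular symmetric $P$ are arbitrary choices) that any $P$ with $P(x)=P(1/x)$ yields a solution:

from sympy import symbols, simplify

x = symbols('x', positive=True)
n = 4
P = x**2 + 1/x**2 + 5            # an arbitrary choice satisfying P(x) = P(1/x)
f = x**(n//2) * P
# verify x^n f(1/x) = f(x)
assert simplify(x**n * f.subs(x, 1/x) - f) == 0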
|functional-equations|
0
Definition of monotonic function
By monotonic function, we refer to an increasing function over $X \subseteq \mathbb{R}$. Why is the definition of a monotonic function always $f(x) \geq f(y) \implies x \geq y$ instead of $x \geq y \implies f(x) \geq f(y)$? Let's say $f$ is increasing according to the definition $f(x) \geq f(y) \implies x \geq y$. Suppose, for the sake of contradiction, there are some $x, y \in X$ with $x \geq y$ such that $f(x) < f(y)$. From the definition, $f(y) \geq f(x)$ implies $y \geq x$. This can only mean that $x = y$, but then we can't have $f(x) < f(y)$, and hence we arrive at a contradiction. Thus, the first definition implies the second.
$$f(x)\ge f(y) \Rightarrow x\ge y$$ is equivalent to its contrapositive $$x<y \Rightarrow f(x)<f(y),$$ which says precisely that $f$ is (strictly) increasing.
|real-analysis|functions|
1
Show that $1 + \sqrt{2}$ is a unit in $\Bbb Z[\sqrt2]$
I'm a beginner to number theory, and in a text book, right after proving the fundamental theorem of arithmetic, the following problem is stated: Let $H_m$ be the subset of real numbers, which can be written in the form of $x + y\sqrt{m}$ , where $x$ and $y$ are integers and $m$ isn't a square number. Show that besides $\pm1$ , $1 + \sqrt{2}$ and $3 + 2\sqrt{2}$ are also units in $H_2$ . The book recommends defining divisibility, units and undecomposablity in the $H_m$ set first. How should I begin solving the problem? Any kind of help is welcome. (I'm not that familiar with the technicalities in English, I'm sorry.)
Units in a ring $R$ form a group $U$, called the group of units; it is easy to see this. In particular, if $x\in U$ then $x^n\in U$ for any $n\in\Bbb Z$. We can easily see that $x=\sqrt2 +1$ is a unit in the ring $\Bbb Z[\sqrt2]$: $$1=2-1=(\sqrt2+1)(\sqrt2-1).$$ So $x^2=3+2\sqrt2$ is a unit as well. As was in some sense already said, one can use the norm of the ring to show that a number is a unit. There is also a very deep theorem, Dirichlet's unit theorem, about the group structure of the group of units in the ring of algebraic integers of a number field.
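A tiny Python sketch of the arithmetic, representing $x + y\sqrt2$ as the integer pair $(x, y)$; the norm $N(x+y\sqrt2)=x^2-2y^2$ mentioned above detects units:

def mul(p, q):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    (a, b), (c, d) = p, q
    return (a*c + 2*b*d, a*d + b*c)

def norm(p):
    x, y = p
    return x*x - 2*y*y

u = (1, 1)                     # 1 + sqrt(2)
print(mul(u, (-1, 1)))         # (1, 0): (1+sqrt2)(-1+sqrt2) = 1, so u is a unit
print(mul(u, u))               # (3, 2): u^2 = 3 + 2*sqrt(2), a unit as well
print(norm(u), norm((3, 2)))   # -1 and 1: units are exactly the elements of norm +-1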
|elementary-number-theory|
0
4th Order Linear Homogeneous Differential Equation
Please solve the following differential equation: $y''''+2y'''-9y''-10y'+50y=0$. I am unable to find the roots of its auxiliary equation.
Hint: $r^4+2r^3-9r^2-10r+50=(r^2-4r+5)(r^2+6r+10)$
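Spelling out where the hint leads (a standard final step, not part of the original hint): the quadratic factors have the complex roots $r = 2 \pm i$ and $r = -3 \pm i$, so the general solution is $$y(x) = e^{2x}\bigl(c_1\cos x + c_2\sin x\bigr) + e^{-3x}\bigl(c_3\cos x + c_4\sin x\bigr).$$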
|ordinary-differential-equations|
0
How do I determine convergence with a comparison test?
$$\sum_{n=2}^\infty \frac{1}{3n^2-2\sqrt{n}}$$ The instructions are to use a comparison test to determine convergence of the series. I thought to compare it to $$\sum_{n=2}^\infty \frac{1}{3n^2}$$ which converges by the p-series test. But, that series is smaller than the original series, so it doesn't prove that the original series will converge too. Where do I go from here?
Hint: $3n^2-2\sqrt{n}\ge n^2$ for $n\ge 2$. In case you do not know how to proceed, I provide more steps below. $3n^2-2\sqrt{n}\ge n^2\iff 2n^2\ge 2\sqrt{n}\iff n^4\ge n\iff n\ge1$. So we have $$\sum^m_{n=2}\dfrac{1}{3n^2-2\sqrt{n}}\le\sum^m_{n=2}\dfrac{1}{n^2}$$ for each $m\ge2$. By the comparison test with the convergent p-series, the original series therefore converges.
|sequences-and-series|
1
Rudin PMA theorem 8.14.
Code borrowed from here (this question has been asked there before). A few definitions that are needed: Dirichlet kernel: $$\tag{77}D_N(x) = \sum_{n=-N}^Ne^{inx} = \frac{\sin\left( (N+\frac12)x \right)}{\sin(x/2)} $$ $$\tag{78}s_N(f; x) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x - t) D_N(t)\, dt $$ Theorem 8.14: If, for some $x$, there are constants $\delta > 0$ and $M < \infty$ such that $$\tag{79}\left| f(x + t) - f(x) \right| \leq M|t| $$ for all $t \in (-\delta, \delta)$, then $$\tag{80}\lim_{N\rightarrow\infty} s_N(f; x) = f(x) $$ And the proof goes as follows: Define $$\tag{81}g(t) = \frac{f(x-t) - f(x)}{\sin(t/2)} $$ for $0 < |t| \leq \pi$, and put $g(0) = 0$. By the definition $(77)$, $$\frac{1}{2\pi} \int_{-\pi}^{\pi}D_N(x)\, dx = 1.$$ Hence $(78)$ shows that \begin{align}s_N(f; x) - f(x) &= \frac{1}{2\pi} \int_{-\pi}^{\pi} g(t) \sin\left(\left(N +\frac12\right)t\right) dt\\ &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ g(t)\cos\frac{t}2 \right]\sin(Nt) \, dt + \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ g(t)\sin\frac{t}2 \right]\cos(Nt) \, dt.\end{align} My question: why do these last two integrals tend to $0$ as $N\to\infty$?
It is $(74)$ that is used. You have, since $e^{-i nt}=\cos(nt)-i \sin(nt)$ and with $h(t)=g(t)\,\cos\frac t2$, $$ c_n=\frac1{2\pi}\int_{-\pi}^{\pi}h(t)e^{-i nt}\,dt =\frac1{2\pi}\int_{-\pi}^{\pi}h(t)\cos nt\,dt -i \frac1{2\pi}\int_{-\pi}^{\pi}h(t)\sin nt\,dt $$ From $(74)$ we know that $c_n\to0$, and so its real and imaginary parts go to zero. In particular, $$ \lim_{N\to \infty}\int_{-\pi}^{\pi} \left[ g(t)\cos\frac{t}2 \right]\sin(Nt) \,dt=0, $$ and the same for the other integral. The answer in the question you linked is wrong, because the estimates ignore that the sine and the cosine are not positive. Edit: Details on the Riemann integrability of $g(t)\sin\frac t2$ and $g(t)\cos\frac t2$. For $t\in[-\pi,\pi]$, the function $\sin \frac t2$ has a single zero at $t=0$. So, on any interval $[-\pi,-\delta]\cup[\delta,\pi]$, the function $1/\sin\frac t2$ is continuous. So the situation is: we have a function $h:[-\pi,\pi]\to\mathbb R$, bounded, and such that $h$ is Riemann integrable on $[-\pi,-\delta]\cup[\delta,\pi]$ for every $\delta\in(0,\pi)$; such a function is Riemann integrable on all of $[-\pi,\pi]$.
|real-analysis|proof-explanation|fourier-series|
1
Is this sequence always eventually periodic regardless of starting value?
$$a(n)=a\big(\lceil |a(n-1)| \rceil\bmod n\big) + a\big(\lceil |a(n-2)| \rceil\bmod n\big)$$ For starting values $a(0)=a(1)=1$, the sequence has a cycle starting at $n=441329$ with a period of $63584$ (source: https://oeis.org/A330615 ). For starting values $a(0)=i$ and $a(1)=1$, the cycle starts at $n=35694$ and has a period of $3605$. My conjecture is that for any complex starting values, this sequence eventually cycles. Can anybody prove or disprove this conjecture? Newer conjecture: the sequence either cycles or eventually forms an arithmetic progression.
For starting values $a(0) = 1$ and $a(1) = 2$ , the sequence simply satisfies $a(n) = n+1$ and does not eventually cycle; hence the conjecture is false.
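A short Python verification of this counterexample:

import math

a = [1, 2]
for n in range(2, 1000):
    a.append(a[math.ceil(abs(a[n-1])) % n] + a[math.ceil(abs(a[n-2])) % n])
assert all(a[n] == n + 1 for n in range(1000))   # a(n) = n + 1 throughout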
|sequences-and-series|elementary-number-theory|complex-numbers|recurrence-relations|
1
Proof of Triangle Inequality for $d(g; x, y) = \left(|x-y|^4 + g\,| x \times y |^2\right)^{\frac{1}{4}}$
I am seeking assistance in proving that a function, denoted $d(g; x, y)$, defined on $\mathbb{R}^2 \times \mathbb{R}^2$ and parameterized by the non-negative real number $g$, may satisfy the triangle inequality. The function is defined as follows: \begin{align} d(g; x, y) &:= \left(|x-y|^4 + g\,| x \times y |^2\right)^{\frac{1}{4}} \\ &= \left( \left((x_1 - y_1)^2 + (x_2 - y_2)^2\right)^2 + g\,(x_1\,y_2 - x_2\,y_1)^2 \right)^{\frac{1}{4}} \end{align} where $x=(x_1,x_2),\,y=(y_1, y_2)$. It is noteworthy that when $g=0$, $d(g;x,y)$ coincides with the Euclidean distance. My ultimate goal is to prove that $d(g; x, y)$ is a distance function on $\mathbb{R}^2$ for a certain range of $g$. While it is trivial that $d(g; x, y)$ is non-degenerate and symmetric with respect to $x$ and $y$, the proof of the triangle inequality is not straightforward. Through numerical calculation, I observed that the triangle inequality seemed to hold in the range $0\leq g \leq 6$; in other words, the conjectured range of validity is $0\leq g\leq 6$.
Some thoughts. Let $$u := \frac{\Big((x_1 - y_1)^2 + (x_2 - y_2)^2\Big)^2 + g(x_1y_2 - x_2 y_1)^2}{\Big((z_1 - x_1)^2 + (z_2 - x_2)^2\Big)^2 + g(z_1x_2 - z_2 x_1)^2},$$ and $$v := \frac{\Big((y_1 - z_1)^2 + (y_2 - z_2)^2\Big)^2 + g(y_1z_2 - y_2 z_1)^2}{\Big((z_1 - x_1)^2 + (z_2 - x_2)^2\Big)^2 + g(z_1x_2 - z_2 x_1)^2}.$$ We need to prove that $$u^{1/4} + v^{1/4} \ge 1.\tag{1}$$ It suffices to prove that, for all $a, b > 0$ and $x_1, x_2, y_1, y_2, z_1, z_2 \in \mathbb{R}$, $$\frac{u}{a^3} + 3a + \frac{v}{b^3} + 3b \ge 4.\tag{2}$$ ( Note : If (2) is true, letting $a = u^{1/4}$ and $b = v^{1/4}$, we have $u^{1/4} + v^{1/4} \ge 1$. ) (2) is written as \begin{align*} &\frac{1}{a^3}\Big((x_1 - y_1)^2 + (x_2 - y_2)^2\Big)^2 + \frac{1}{b^3}\Big((y_1 - z_1)^2 + (y_2 - z_2)^2\Big)^2 \\[6pt] &\quad - (4 - 3a - 3b)\Big((z_1 - x_1)^2 + (z_2 - x_2)^2\Big)^2\\[6pt] \ge{}& g\Big[ (4 - 3a - 3b)(z_1x_2 - z_2 x_1)^2 - \frac{1}{a^3}(x_1y_2 - x_2 y_1)^2 - \frac{1}{b^3}(y_1z_2 - y_2 z_1)^2 \Big]. \tag{3}\end{align*}
|geometry|inequality|
0
Why does the equation $x = 1$ represent a line in a 2-dimensional coordinate system?
I'm posting a question because I was curious about something while studying linear algebra. As we all know, $x = 1$ is a point in a one-dimensional coordinate system. I understand this part. But why does $x = 1$ represent a line in two dimensions? Is it simply defined that way? If anyone knows anything about this, please help. It may seem trivial, but I feel really uncomfortable and think about it every day. Thank you.
Your problem probably comes from the fact that you don't really know what a two-dimensional coordinate system is and how a line in a two-dimensional coordinate system is mathematically defined. It is linear algebra, closely associated with elementary geometry. First we define $$E=\mathbb R \times \mathbb R=\{(x,y): x\in \mathbb R,\,y\in \mathbb R \}.$$ We multiply an element $\lambda$ of $\mathbb R$ by an element $(x,y)$ of $E$ like this: $$\lambda(x,y):=(\lambda x,\lambda y).$$ For example, $$5(0,1)=(5\times 0,5\times 1)=(0,5).$$ We add two elements of $E$ like this: $$(\color{green}1,0)+(\color{green}0,5):=(\color{green}{1+0},0+5).$$ You can think of an element of $E$ as a point or a vector, whichever makes the most sense to you. Then we define a line in $E$ as a subset of $E$ of the form $$l=a+\mathbb R u,$$ where $\mathbb R u$ is called the direction of $l$. Here, with $a=(1,0)$ and $u=(0,1)$, you obtain $$l=(1,0)+\mathbb R (0,1)=\{(x,y)\in \mathbb R \times \mathbb R: \exists \lambda \in \mathbb R,\ (x,y)=(1,0)+\lambda(0,1)\}=\{(1,\lambda):\lambda\in\mathbb R\},$$ which is exactly the set of points whose first coordinate is $1$, i.e. the vertical line described by the equation $x=1$.
|coordinate-systems|
1
The operation $ (a,b)(c,d)=(ac-bd,ad+bc) $ on $\Bbb R\times\Bbb R\backslash (0,0)$ yields a group
Here is the binary operation $*$ on $\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}$ defined by $(a,b)(c,d)=(ac-bd,ad+bc)$. My idea is that to show $(\mathbb{R}\times \mathbb{R} \setminus \{(0,0)\}, *)$ is a group, I need to show that $*$ is well-defined and associative and then show it has an identity and inverses. I am struggling to do the first part. How do I show $*$ is well-defined (and is the first part required)? Is showing that $ac-bd=0,\ ad+bc=0$ will only be true if $a=b=c=d=0$ sufficient?
You have to prove that: $$((a\ne0)\vee (b\ne0))\wedge ((c\ne0)\vee (d\ne0)) \Longrightarrow (ac-bd\ne0)\vee (ad+bc\ne0),$$ which is true iff (contrapositive): $$(ac-bd=0)\wedge (ad+bc=0)\Longrightarrow\neg((a\ne0)\vee(b\ne0))\vee \neg((c\ne0)\vee (d\ne0)),$$ namely: $$(ac-bd=0)\wedge (ad+bc=0)\Longrightarrow((a=0)\wedge(b=0))\vee ((c=0)\wedge (d=0)),$$ which in turn is true by @TravisWillse's hint: if $ac-bd=0$ and $ad+bc=0$, then $(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2=0$, so $a^2+b^2=0$ or $c^2+d^2=0$.
|abstract-algebra|group-theory|functions|binary-operations|
0
Find an equivalence relation over all of $\mathbb{Z}$ which has infinitely many equivalence classes with infinitely many elements in each
I want to find an equivalence relation defined on all integers (that is, all of $\mathbb{Z}$) where: (1) the equivalence relation partitions $\mathbb{Z}$ into infinitely many equivalence classes; and (2) every equivalence class contains an infinite number of elements. I've been thinking about this interesting question for a while, and I have come up with two ideas which are close, but not complete solutions. My first idea was to define the equivalence relation $\sim$ where $x \sim y \iff x = \pm p^m, y = \pm p^n$ for some prime number $p$ and some integers $m, n$. This will certainly create infinitely many equivalence classes, where each equivalence class essentially contains all powers of one prime number (positive or negative). Since there are infinitely many prime numbers, there will be infinitely many equivalence classes, and each equivalence class will contain infinitely many elements. However, there are two problems with this idea: firstly, $1$ and $-1$, which are equal to $\pm p^0$ for every prime $p$, would belong to every class; and secondly, integers that are not of the form $\pm p^m$ (such as $0$, $6$ or $12$) are not covered by any class.
I will give 2 solutions, though there are variations & tweaks which can give more solutions. SOLUTION 1: Using the decimal representation of some transformation function $f(n)$ involving reciprocals. Consider the rational number $1/(9+|n|)$: it will have a non-zero decimal representation. By a basic property of rational numbers, there will eventually be repeating digits in that representation. CORE IDEA: take the repeating digits to be the equivalence class for the integer $n$. We see that we will have infinitely many equivalence classes. Moreover, we will have infinitely many elements in each equivalence class. Here is a listing: $n=0$, $1/(9+0)=0.\color{orange}{1}111\cdots$ : $0 \approx [1]$; $n=\pm1$, $1/(9+1)=0.1\color{orange}{0}00\cdots$ : $\pm1 \approx [0]$; $n=\pm2$, $1/(9+2)=0.\color{orange}{10}101010\cdots$ : $\pm2 \approx [10]$; $n=\pm3$, $1/(9+3)=0.08\color{orange}{3}333\cdots$ : $\pm3 \approx [3]$; $n=\pm4$, $1/(9+4)=0.0\color{orange}{769230}76923076923\cdots$ : $\pm4 \approx [769230]$
|discrete-mathematics|examples-counterexamples|equivalence-relations|integers|infinitary-combinatorics|
0
Doubt in finding limit of fog(x)
I know that if $\lim_{x\to a}g(x)=m$, then $\lim_{x\to a}f(g(x))=f(m)$ if $f$ is continuous at $m$. My doubt is: why does $f$ have to be continuous? We can find $f(m)$ even if $f$ is discontinuous at $m$.
In general, $\lim_{x \to a} f(x) \neq f(a)$ , because $f$ may not be defined at $x = a$ (if $f$ is not continuous at $a$ ). Continuity is a necessary and sufficient condition for $\lim_{x \to a} f(x) = f(a)$ to hold. For example, consider the function $g(x) = x$ and $f(x) = \frac{\sin x}{x}$ . $\lim_{x \to 0} g(x) = 0$ $\lim_{x \to 0} f(g(x)) = \lim_{x \to 0} \frac{\sin x}{x} = 1$ But, $f$ is not defined at $0$ .
|limits|
1
help in figuring out a step in sum of a series
We have to find $$\sum_{k=0}^\infty kq^k$$ for $|q|<1$. The solution done by my professor was the following. $$\begin{align*} S&=\sum_{k=0}^\infty kq^k=\sum_{k=1}^\infty kq^k=q\sum_{k=1}^\infty kq^{k-1}\\ &=q\sum_{k=1}^\infty q^{k-1}+q\sum_{k=1}^\infty(k-1)q^{k-1}\\ &=q\sum_{k=1}^\infty q^{k-1}+q\sum_{k^\prime=1}^\infty k^\prime q^{k^\prime}\\ &=q\sum_{k=1}^\infty q^{k-1}+q\cdot S \end{align*}$$ where we substituted $k^\prime=k-1$ in the penultimate line. I didn't get how he added $q\sum_{k=1}^\infty (k-1)q^{k-1}$ in the second line. For background, I am in my first year of computer science engineering, and I have done Analysis II.
He did not add it. Note that $$kq^{k-1}=(\color{green}{k-1}+\color{blue}{1})q^{k-1}=(\color{green}{k-1})q^{k-1}+\color{blue}{1}q^{k-1}$$ Therefore, $$\sum_{k=1}^\infty kq^{k-1}=\sum_{k=1}^\infty(k-1)q^{k-1}+\sum_{k=1}^\infty q^{k-1}$$ and so $$q\sum_{k=1}^\infty kq^{k-1}=q\sum_{k=1}^\infty(k-1)q^{k-1}+q\sum_{k=1}^\infty q^{k-1}$$ Hope this helps. :)
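To finish the computation the derivation sets up (a standard final step, not part of the original answer): the remaining sum is geometric, so $$S = q\sum_{k=1}^{\infty} q^{k-1} + qS = \frac{q}{1-q} + qS \quad\Longrightarrow\quad S(1-q) = \frac{q}{1-q} \quad\Longrightarrow\quad S = \frac{q}{(1-q)^2}.$$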
|sequences-and-series|
1
Proving $\phi(x) = n$ for natural values of $n$ has finite solutions
Question: Prove that for any given $n \in \mathbb N$, $\phi(x) = n$ has only finitely many solutions. My attempt: Let there be infinitely many solutions of $$\phi(x) = n,$$ so every natural number greater than $x$ has only $n$ primes before it. Thus, every number after the last prime is composite. Now, let the finite set of primes be $p_1,p_2,\ldots,p_n$. The number $p_1p_2\cdots p_n + 1$ is relatively prime to all of the finitely many primes, thus it is a new prime. This contradiction violates our assumption of a finite set of primes, so our assumption of infinitely many solutions is wrong. Hence, $\phi(x) = n$ has only finitely many solutions. Issue: I feel that the approach of $p_1p_2\cdots p_n + 1$ is wrong and doesn't prove that the number itself is prime, but since it is of the form $1 \bmod p_i$ for $i \in \{1,2,\ldots,n\}$, it must be coprime to all of them. Also, something about this proof feels off in terms of the assumptions. Any help is very appreciated! PS: I know this is a duplicate, but I am asking about my own attempt.
Edit: new answer due to change in OP question. Suppose $x$ has the following factorization: $$x = \prod_{p\mid x}p^{k_p}$$ Using Euler's product formula, we have $$\phi(x) = \prod_{p\mid x}p^{k_p-1}(p-1)$$ Therefore $$\frac{\phi^2(x)}{x} = \prod_{p\mid x}p^{k_p-2}(p-1)^2 \geq \prod_{p\mid x}\frac{(p-1)^2}{p}$$ since $k_p \geq 1$. Notice that $\dfrac{(p-1)^2}{p} > 1$ for $p >2$, therefore: $$\frac{\phi^2(x)}{x} \geq \frac{1}{2} \Rightarrow \phi(x) \geq \sqrt{\frac{x}{2}}$$ If $\phi(x)=n$ then $n\geq\sqrt{\dfrac{x}{2}}$, therefore $x\leq2n^2$. Thus there can only be a finite number of solutions to $\phi(x) = n$.
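A brute-force check of the resulting bound $x \le 2n^2$ for small $n$ (a naive totient is fine at this scale):

from math import gcd

def phi(x):
    return sum(1 for k in range(1, x + 1) if gcd(k, x) == 1)

for n in range(1, 13):
    # search well beyond 2n^2 to confirm no solutions lie past the bound
    sols = [x for x in range(1, 4 * n * n + 1) if phi(x) == n]
    assert all(x <= 2 * n * n for x in sols)
    print(n, sols)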
|elementary-number-theory|totient-function|
1
How to calculate the Volume of an Elliptic Truncated Cone?
I have been attempting to find the volume of an elliptic truncated cone by dividing it into cross-sections of elliptical cylinders and then stacking them up. I got the idea from the integration of the truncated cone, but I am not able to continue with the method as there are 3 variables involved. Could someone please guide me through the process that I am supposed to follow? (Figures: the cone; the cross-section.)
Extend the sides of the cone upward in your imagination until they meet and form a phantom "full cone" and a phantom "small cone" on top of the elliptic cone frustum. The volume of the frustum is the volume of the phantom "full cone" minus the volume of the phantom "small cone".
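A sketch of that subtraction in formulas, assuming the two elliptical faces are parallel cross-sections of one cone so the semi-axes interpolate linearly between the faces (the symbols $a_1,b_1,a_2,b_2,h$ are mine, not the answer's): with $a(t)=a_1+(a_2-a_1)t$ and $b(t)=b_1+(b_2-b_1)t$ for $t\in[0,1]$, $$V = \pi h\int_0^1 a(t)\,b(t)\,dt = \frac{\pi h}{6}\bigl(2a_1b_1 + a_1b_2 + a_2b_1 + 2a_2b_2\bigr),$$ which reduces to the familiar $\frac{\pi h}{3}\left(r_1^2 + r_1r_2 + r_2^2\right)$ for a circular frustum ($a_i=b_i=r_i$).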
|calculus|integration|differential-geometry|volume|elliptic-equations|
0
How exactly does Euler notation work for complex numbers?
When textbooks say $e^{i\theta} = \cos\theta + i\sin\theta$ , is the $e$ actually Euler's number, 2.71828..., or is this purely a symbolic equivalent notation where the $e$ does not represent this number? If it is the actual number, why is it used rather than another, more intuitive number? Does $\theta$ have to be measured in radians or can it be measured in either radians or degrees? Does the formula work when an angle is larger than 2 $\pi$ and goes around the circle over and over? If so, why do the larger values of $\theta$ in the exponent not increase the distance the point is from the origin?
$e$ is the same $e$ you see in natural logarithm bases and whilst $\theta$ should ideally be between $0$ and $2\pi$ for basic examples, you can choose larger or smaller values but radians must be used. There are a few nice proofs of this formula to be found by doing a quick search online. The larger (or smaller) values of $\theta$ do not affect the distance from the origin because both $\sin{\theta}$ and $\cos{\theta}$ are oscillating (periodic) functions so the $x$ and $y$ or real and complex parts will always remain in the range $[-1,1]$ . More specifically, the identity $\sin^{2}{\theta}+\cos^{2}{\theta}=1$ holds true for all values of $\theta$ so the point will always be a distance of 1 from the origin unless a dilation factor $r$ is applied to make $r\times e^{i\theta}=r(\cos{\theta}+i\sin{\theta})$ in which case the point will always be at a distance of $r$ from the origin.
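A quick numeric illustration with Python's cmath ($\theta = 5\pi/7$ is an arbitrary angle):

import cmath, math

theta = 5 * math.pi / 7
z = cmath.exp(1j * theta)                 # e^{i*theta}, with theta in radians
assert cmath.isclose(z, complex(math.cos(theta), math.sin(theta)))
assert math.isclose(abs(z), 1.0)          # always on the unit circle
assert cmath.isclose(z, cmath.exp(1j * (theta + 2 * math.pi)))   # 2*pi-periodic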
|complex-numbers|
0
How exactly does Euler notation work for complex numbers?
When textbooks say $e^{i\theta} = \cos\theta + i\sin\theta$ , is the $e$ actually Euler's number, 2.71828..., or is this purely a symbolic equivalent notation where the $e$ does not represent this number? If it is the actual number, why is it used rather than another, more intuitive number? Does $\theta$ have to be measured in radians or can it be measured in either radians or degrees? Does the formula work when an angle is larger than 2 $\pi$ and goes around the circle over and over? If so, why do the larger values of $\theta$ in the exponent not increase the distance the point is from the origin?
More likely, this "Euler's identity" is just notation in complex analysis; I personally would not put this in the real-analysis branch. In the complex world, you should forget most of the stuff you have learnt in the real case. The $e^z$ here is defined similarly to the real case, using Euler's limit definition, but with a complex limit (at least that is how it was defined when I was learning complex analysis): $$\lim_{n\to+\infty}\left(1+\dfrac{z}{n}\right)^n=e^z$$ And the complex trigonometric functions are defined as $$\cos z=\dfrac{e^{iz}+e^{-iz}}{2}\quad\sin z=\dfrac{e^{iz}-e^{-iz}}{2i}$$ One preliminary result is that they coincide with the real definitions, so we are happy with that. You can try to express $e^z$ using $\cos z$ and $\sin z$, which should give the following result: $$\cos z+i\sin z=e^{iz}$$ The $z$ here can be an arbitrary complex number; it is not restricted to $(0,2\pi)$.
|complex-numbers|
0
Finding an equation of a plane passing through the origin with cylinder such that the intersection is a circle.
I have the following question here... Find an equation of a plane through the origin such that the intersection between the plane and the elliptical cylinder $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ is a circle. Wouldn't this simply be $ax+by=0$? The plane has to pass through the origin, so $d=0$, and since the plane has to be perpendicular to the cylinder to get a circular cross-section, we get that $z=0$. This results in $ax+by=0$. I feel like this is too simple and I am overlooking something.
If for instance $a\ge b$ , take a plane $$z=ky.$$ The intersection with your cylinder will be an ellipse, whose semi-axes $a$ and $b\sqrt{1+k^2}$ are equal iff $$k=\pm \frac{\sqrt{a^2-b^2}}b.$$
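A numeric check of this answer ($a = 3$, $b = 2$ are arbitrary values with $a \ge b$): every point of the slice of the cylinder by $z=ky$ lies at distance $a$ from the origin, i.e. the intersection is a circle of radius $a$.

import numpy as np

a, b = 3.0, 2.0
k = np.sqrt(a**2 - b**2) / b
t = np.linspace(0, 2 * np.pi, 400)
x, y = a * np.cos(t), b * np.sin(t)      # the cylinder's cross-section curve
z = k * y                                # lift it onto the plane z = k*y
r = np.sqrt(x**2 + y**2 + z**2)          # distance from the origin
assert np.allclose(r, a)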
|calculus|quadrics|
0
One of the numbers $\zeta(5), \zeta(7), \zeta(9), \zeta(11)$ is irrational
I am reading an interesting paper, One of the numbers ζ(5), ζ(7), ζ(9), ζ(11) is irrational, by Zudilin. We fix odd numbers $q$ and $r$, $q\geq r+4$, and a tuple $\eta_0,\eta_1,\ldots,\eta_q$ of positive integer parameters satisfying the conditions $\eta_1\leq \eta_2\leq\ldots\leq \eta_q$ and $$ \eta_1+\eta_2+\ldots+\eta_q\leq \eta_0\left(\frac{q-r}{2}\right)\tag{1}$$ Define $$F_n:=\frac{1}{(r-1)!}\sum_{t=0}^\infty R_n^{(r-1)}(t)\tag{2}$$ and note that $R_n(t)=O(t^{-2})$. We put $m_j=\max\{\eta_r,\eta_0-2\eta_{r+1},\eta_0-\eta_1-\eta_{r+j}\}$ for $j=1,2,\ldots,q-r$ and define the integer $$\Phi_n:=\prod_{\sqrt{\eta_0 n}<p\,\leq\,\cdots}p^{\,\cdots},$$ where only primes enter the product, and $$\varphi(x)=\min_{0\leq y<1}(\cdots),$$ where $[\,\cdot\,]$ denotes the ceiling function. Let $D_N$ denote the lcm of $1,2,\ldots,N$. Lemma $1$: $(2)$ defines a linear form in $\zeta(r+2),\zeta(r+4),\ldots,\zeta(q-2)$ with rational coefficients; moreover, $$ D_{m_1n}^r\, D_{m_2n}\cdots D_{m_{q-r}n}\cdot\Phi_n^{-1}\cdot F_n\in\mathbb{Z}+\mathbb{Z}\zeta(r+2)+\mathbb{Z}\zeta(r+4)+\ldots+\mathbb{Z}\zeta(q-2).$$
Too long for a comment. I have entered the code given by @davidlowryduda on SageMath as:

# This is sage code, for the Sagemath computer algebra system.
# It's very similar to python, but with extra batteries included.
etas = [91, 27, 27, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]
r = 3
q = 13
term1(x) = 1
term2(x) = 1
for eta in etas:
    term1 *= (x - eta)
term1 *= (x - etas[0])^2
for eta in etas[1:]:
    term2 *= (x - etas[0] + eta)
term2 *= x^3
f = term1 - term2
f = CC[x](f)  # interpret f as a polynomial over complex numbers
roots = f.roots()
# print(roots)
cand_roots = [r for r, deg in roots if imag(r) > 0]
cand_roots.sort(key=lambda z: real(z))
tau0 = cand_roots[-1]

def f0(tau):
    ret = r * etas[0] * log(etas[0] - tau)
    for j in range(1, q + 1):
        term = etas[j] * log(tau - etas[j])
        term -= (etas[0] - etas[j]) * log(tau - etas[0] + etas[j])
        ret += term
    for j in range(1, r + 1):
        term = 2 * etas[j] * log(etas[j])
        ret -= term
    for j in range(r + 1, q + 1):
        term = (etas[0] - 2 * etas[j]) * log(etas[0] - 2 * etas[j])
        ret -= term
    return ret
|number-theory|analytic-number-theory|riemann-zeta|computational-mathematics|computational-number-theory|
0
Why can't I integrate the derivative of a cylinder's volume to find it again?
For a cylinder, the volume formula is given by $V = \pi r^2 H$. To find the differential $dV$, we differentiate $V$ with respect to $r$ and $H$, yielding $dV = 2 \pi r H \,dr + \pi r^2 \,dH$. Upon integrating $dV$ again, we obtain $2 \pi r^2 H$. However, I'm confused about where the factor of two originates. Additionally, in calculus, it's taught that to derive the formula for the volume of a cylinder, we perform a double integral of $da$ over the cylinder's surface area $A$ and $dh$ over the height $H$. This suggests that $V = \iint 2 \pi r \,dr \,dh$, or equivalently $V = \int \pi r^2 \,dH$, which only includes the second term of the earlier $dV$ expression. Does this imply that the first term is zero?
Before getting into differentials, let's talk about that calculus formula, $V = \iint 2 \pi r\,\mathrm dr \,\mathrm dh$ . How do we know that this gives the volume of a cylinder? I think it is easier to justify the formula $$V = \iiint_S f(r,\theta,h)\, r\,\mathrm d\theta \,\mathrm dr \,\mathrm dh$$ where $f(r,\theta,h)=1$ , which is the integral of the constant function $1$ over the interior of a cylinder $S$ in cylindrical coordinates and $r$ is the Jacobian of the coordinate transformation. (That is, the volume element is $\mathrm dV = r\,\mathrm d\theta \,\mathrm dr \,\mathrm dh$ .) We choose the cylindrical coordinates such that the axis of the cylinder $S$ is the axis of the coordinate system, so the integral over $\theta$ can simply run from $0$ to $2\pi$ for every $r$ and $h$ . Then by doing the integration over $\theta$ first we reduce the integral to $V = \iint 2 \pi r\,\mathrm dr \,\mathrm dh$ . The point here is that $r$ and $h$ in this integral are not the radius and height
|calculus|geometry|volume|
1
How exactly does Euler notation work for complex numbers?
When textbooks say $e^{i\theta} = \cos\theta + i\sin\theta$ , is the $e$ actually Euler's number, 2.71828..., or is this purely a symbolic equivalent notation where the $e$ does not represent this number? If it is the actual number, why is it used rather than another, more intuitive number? Does $\theta$ have to be measured in radians or can it be measured in either radians or degrees? Does the formula work when an angle is larger than $2\pi$ and goes around the circle over and over? If so, why do the larger values of $\theta$ in the exponent not increase the distance the point is from the origin?
Yes, $e$ is the same value of $e$ that is called Euler's number. There isn't any "intuitive" reason I can think of why another number would work in this equation. What kind of number could be the "intuitive" one to put in this equation? In fact, the logarithm with base $e$ is called the natural logarithm because the powers of $e$ have particularly nice properties. You might even say that $e$ is the "natural" base for exponentiation, since exponentiation and logarithms are so closely related. In that sense, $e$ is the most "intuitive" number to find raised to a power in an equation, if you had to make a wild guess as to what number to raise to a power. Yes, you need to assume $\theta$ is in radians when using the formula $e^{i\theta} = \cos\theta + i\sin\theta$ , including the assumption that both of the trig functions expect their input values to be in radians. That's because $e^{i\pi} = \cos\pi+ i\sin\pi = -1$ whereas $e^{i180} \neq -1$ . No, you don't get any farther from the origin
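To make "going around the circle" concrete, here is a small Python illustration (an addition of mine, not from the answer itself):

import cmath, math

# e^{i*theta} always lies on the unit circle, however large theta is
for theta in [math.pi, 10 * math.pi, 1000.0]:
    z = cmath.exp(1j * theta)
    print(theta, z, abs(z))   # |z| is always 1.0

# Radians matter: e^{i*pi} is -1, but 180 (radians!) lands elsewhere on the circle
print(cmath.exp(1j * math.pi))   # approximately -1, up to rounding
print(cmath.exp(1j * 180))       # a different unit-modulus number, not -1

However large the real coefficient $\theta$ is, $|e^{i\theta}|$ stays exactly $1$ ; only a nonzero real part in the exponent changes the modulus.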
|complex-numbers|
0
What is $\prod _{j=1}^n \left(\sqrt{j}+1\right)$?
By the Fundamental Theorem of Algebra, it is easily seen that for a monic polynomial $p(x) \in \mathbb{C}[x]$, $$\prod _{j=1}^n p(j) = \frac{\prod_{p(r)=0}\Gamma(1+n-r)}{\prod_{p(r)=0}\Gamma(1-r)},$$ where the products in the right-hand side of the above equality are over all roots $r$ of $p(x)$. For example, we have that $$\prod_{j=1}^{n} (j^2+1) = \frac{\Gamma (1+n-i) \Gamma (1+n+i)}{\Gamma (1-i) \Gamma (1+i)},$$ letting $i$ denote the imaginary unit. Mathematica is not able to symbolically evaluate the product $$\prod _{j=1}^n \left(\sqrt{j}+1\right),$$ and it is not obvious to me as to how to evaluate the above product in terms of special functions such as the gamma function. It is natural to ask: (1) Is there a known way of evaluating the product $\prod _{j=1}^n \left(\sqrt{j}+1\right)$ in terms of special functions such as the gamma function? (2) More generally, is there a known way of evaluating products of the form $\prod _{j=1}^n a(j)$ in terms of special functions such as the
Trying to compare to the continuous case, it seemed interesting since $$\int \log \left(\sqrt{j}+1\right)\,dj=\sqrt{j}-\frac{j}{2}+(j-1) \log \left(\sqrt{j}+1\right)$$ Using the Euler–Maclaurin summation formula $$\log\left(\prod _{j=1}^n \left(\sqrt{j}+1\right)\right)=C+\frac{1}{2} n (\log (n)-1)+2 \sqrt{n}-\frac 14 \log(n)-\frac{1}{6 \sqrt{n}}\left(1-\frac{1}{4 \sqrt{n}}+\frac{1}{20 n}-\frac{1}{140 n^2}+\frac{1}{120 n^2\sqrt n}+O\left(\frac{1}{n^3}\right)\right)$$ For this level of expansion $$C=\frac{\log (2)}{2}-\frac{88974215977}{87199580160}$$ For $n=100$ , this truncated series gives $198.41718$ while the exact value is $198.41707$ .
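As a sanity check, the exact logarithm of the product can be compared against this truncated expansion numerically; a minimal Python sketch:

import math

n = 100

# Exact value of log(prod (sqrt(j)+1)), summed directly
exact = sum(math.log(math.sqrt(j) + 1) for j in range(1, n + 1))

# Truncated Euler-Maclaurin expansion from the answer above
C = math.log(2) / 2 - 88974215977 / 87199580160
series = (C + 0.5 * n * (math.log(n) - 1) + 2 * math.sqrt(n) - 0.25 * math.log(n)
          - (1 - 1 / (4 * math.sqrt(n)) + 1 / (20 * n) - 1 / (140 * n**2)
             + 1 / (120 * n**2 * math.sqrt(n))) / (6 * math.sqrt(n)))

print(exact, series)  # 198.41707... and 198.41718..., matching the quoted values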
|gamma-function|products|
0
How to find the circle of curvature centre
For the following question I am asked to find the radius of the circle of curvature for the function $-0.18e^{4.88x}$ . I found the radius $1.681459915$ by using the formula $R = 1/\rho$ , and this was correct; however my answer for the centre is incorrect, and I am unsure why. It is close though. I have attached a photo of my working but I will also note it here. I used the formula $(x+R\,(dy/dx),\ y+R)$ and from that I got $(-0.1902818505, 1.65590175)$ . The correct answers are $(x,y) = (-0.0608106, -1.69409)$ . Any explanation as to how these correct answers were found and where I went wrong? Am I using the wrong formula? I have gotten 4 of the same questions wrong and I am stumped.
We have $f(x,y) = y + 0.18 e^{4.88 x} = 0$ and the normal to that curve at the point $p=(-0.40,-0.18 e^{4.88 (-0.40)})$ is $\vec v = (0.8784e^{4.88(-0.40)},1)$ ; then the circle's center is located at $$ (x_0,y_0)= p - R\frac{\vec v}{\|\vec v\|} = (-0.4,-0.0255582)-1.681459915\,(0.123765, 0.992312) = (-0.608106, -1.69409) $$ In red $-\frac{\vec v}{\|\vec v\|}$ , in blue $f(x,y)=0$ , and in dotted black the osculating circle: the black dot is its center $(x_0,y_0)$ , and the selected point is in red.
|calculus|circles|curvature|
0
Tempered distribution in Sobolev sapce
Let $f \in L^2(\mathbb R^n)$ and $u \in \mathcal S'(\mathbb R^n)$ , where $\mathcal S'$ denotes the tempered distribution space. I want to show that if $$ u - \Delta u = f \ \text{ in }\ \mathcal S', $$ then $u \in H^2(\mathbb R^n)$ , which means $$ \| (1+|\xi|^2) \hat u(\xi)\|_{L^2}^2 = \int_{\mathbb R^n} \big| (1+|\xi|^2) \hat u(\xi) \big|^2 d\xi < \infty. $$ I have shown that $\mathcal F[(1+|x|^2) \varphi(x)] = (1-\Delta)\hat\varphi$ for any $\varphi \in \mathcal S$ . Thus $$ \langle (1+|\xi|^2) \hat u, \varphi \rangle = \langle \mathcal F[(1-\Delta) u], \varphi \rangle = \langle \hat f, \varphi \rangle, \quad \forall \varphi \in \mathcal S. $$ But I have no idea how to show the rest. If, at least, $\hat u$ is a locally integrable function, we can conclude that $\| (1+|\xi|^2) \hat u(\xi)\|_{L^2} = \| \hat f \|_{L^2}$ , since $(1+|\xi|^2) \hat u(\xi) = \hat f$ a.e. by the fundamental lemma of calculus of variations. But for $u$ that is a tempered distribution, how to deal with it?
Consider a smooth function $\psi$ which is supported in $B(0,1)$ , is positive, and has integral $1$ , and set $\psi_m(x)=m^n\psi(mx)$ . Now convolve both sides of the equation: $$ u_m -\Delta u_m = f_m, $$ where $u_m = u*\psi_m$ and $f_m = f*\psi_m$ . We have $$ f_m \to f, \qquad \text{in $L^2$ and in $\mathcal{S}'$} $$ and $$ u_m \to u, \qquad \text{in $\mathcal{S}'$}. $$ Since $\|u_m\|_{H^2} \le \|f_m\| \le C(\|f\| + 1)$ , we have $u_m \to u$ in $L^2$ , and in $H^2$ weakly. (If necessary, we choose a subsequence of $\{m\}$ .) The problem is similar to the one in the following answer. Relation between distribution and $L^p$
|functional-analysis|lp-spaces|sobolev-spaces|distribution-theory|
0
How exactly does Euler notation work for complex numbers?
When textbooks say $e^{i\theta} = \cos\theta + i\sin\theta$ , is the $e$ actually Euler's number, 2.71828..., or is this purely a symbolic equivalent notation where the $e$ does not represent this number? If it is the actual number, why is it used rather than another, more intuitive number? Does $\theta$ have to be measured in radians or can it be measured in either radians or degrees? Does the formula work when an angle is larger than $2\pi$ and goes around the circle over and over? If so, why do the larger values of $\theta$ in the exponent not increase the distance the point is from the origin?
To directly address the last question: "If so, why do the larger values of θ in the exponent not increase the distance the point is from the origin?" Because that's how exponentiation works with complex arguments: when you change the imaginary part of the number (and as $\theta$ is multiplied by $i$ , that is what happens here, for $\theta\in\mathbb R$ ) you just go around; to get farther from (or closer to) the origin you have to change the real part of the argument.
|complex-numbers|
0
On the value of $x$ for which a point mass falls off a curve.
Not sure if this is an appropriate venue for this question; please close the question as opposed to migrating it to Physics SE, because it is not appropriate there, thanks. If we have a functional given by: $$I=\int_1^2\bigg(L+\lambda f\bigg)\;dt,\tag{1}$$ the Euler-Lagrange equations are given by: $${d\over dt}\bigg({\partial L\over\partial \dot x}\bigg)-{\partial L\over \partial x}=-\lambda{\partial f\over\partial x}.\tag{2}$$ The Lagrangian is $$L={m\over 2}(\dot x^2+\dot y^2)-mgy,\tag{3}$$ $g\gt 0$ and the constraint $$f=y+\cosh x-2=0.\tag{4}$$ We may write the Lagrangian in terms of the constraint as: $$L={m\over 2}\dot x^2\left(1+\sinh^2 x\right)-mg(2-\cosh x).\tag{5}$$ This problem concerns finding the point at which a point mass "falls off" a surface defined via the constraint. It is desired to find the value of $x$ such that $\lambda=0$ . However my attempt at solving the problem ends at computing the inverse hyperbolic cosine, which is of course undefined. So it seems there is an error? Perhaps...
An alternative way. Considering the Lagrangian $$ L={m\over 2}(\dot x^2+\dot y^2)-mgy+\lambda f(x,y),\ \ \ \ f(x,y) = y+\cosh x-2=0 $$ the equations of motion are $$ \cases{ m x''+\lambda \sinh x = 0\\ m y'' + \lambda + gm = 0 } $$ Now, differentiating $f$ twice with respect to $t$ , we obtain $$ \sinh x\, x''+y''+\cosh x\, x'^2=0 $$ Solving $$ \cases{ m x''+\lambda \sinh x = 0\\ m y'' + \lambda + gm = 0\\ \sinh x\, x'' + y''+ \cosh x\, x'^2=0 } $$ for $(x'', y'', \lambda)$ we obtain $$ \left\{ \begin{array}{l} x''=\frac{g \sinh (x)+x'^2 \sinh (x) \cosh (x)}{\sinh ^2(x)+1} \\ y''=-\frac{g \sinh ^2(x)+x'^2 \cosh (x)}{\sinh ^2(x)+1} \\ \lambda =-\frac{g m-m x'^2 \cosh (x)}{\sinh ^2(x)+1} \\ \end{array} \right. $$ Attached is a MATHEMATICA script performing a simulation.

parms = {m -> 1, g -> 9.81, y0 -> 2 - Cosh[1/4], x0 -> 1/4};
p = {x[t], y[t]};
L = m/2 D[p, t] . D[p, t] - m g p . {0, 1} - lambda (y[t] + Cosh[x[t]] - 2);
mov = D[Grad[L, D[p, t]], t] - Grad[L, p];
solxy = Solve[Thread[Join[mov, {D[y[t] + Cosh[
|physics|lagrange-multiplier|constraints|euler-lagrange-equation|
0
Is $\inf\left\{t\in\left[0,1\right]\vert t+B^2_t=1\right\}$ a stopping time?
Problem Let $\left(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P}\right)$ be a filtered probability space such that $(\mathcal{F}_t)_{t\geq 0}$ is a complete and right-continuous filtration and let $B=(B_t)_{t\geq 0}$ be a standard $\mathbb{R}$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -Brownian motion. Is $S:=\inf\left\{t\in\left[0,1\right]\vert t+B^2_t=1\right\}$ a stopping time? Similiar Problems The definition of a stopping time doesn't seem to be immediately useful here: $T:\Omega\to\mathbb{R}_+\cup\left\{\infty\right\}$ is called a stopping time if $\left\{T\leq t\right\}\in\mathcal{F}_t$ for all $t\geq 0$ . Normally, I make use of the fact that $B$ is (a.s.) continuous and: If $(\mathcal{F}_t)_{t\geq 0}$ is a right-continuous filtration and $X=(X_t)_{t\geq 0}$ is an (a.s.) continuous $\mathbb{R}$ -valued adapted process, then $T_A := \inf\left\{t\geq 0\vert X_t\in A\right\}$ is a stopping time for every open or closed Borel set $A\in\mathcal{B}(\mathbb{R})$ . As an example, c
(Edited) Useful theorems. Assuming nothing about the filtration: If $X=(X_t)_{t\geq 0}$ is a continuous $\mathbb{R}^d$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process, then $T_A:=\inf\left\{t\geq 0 \vert X_t \in A\right\}$ is a stopping time for every closed Borel set $A\in\mathcal{B}(\mathbb{R}^d)$ . Assuming that the filtration is right-continuous: If $(\mathcal{F}_t)_{t\geq 0}$ is a right-continuous filtration and $X=(X_t)_{t\geq 0}$ is a continuous $\mathbb{R}^d$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process, then $T_A:=\inf\left\{t\geq 0\vert X_t\in A\right\}$ is a stopping time for every open or closed Borel set $A\in\mathcal{B}(\mathbb{R}^d)$ . Assuming that the filtration is right-continuous and complete: If $(\mathcal{F}_t)_{t\geq 0}$ is a right-continuous and complete filtration and $X=(X_t)_{t\geq 0}$ is an (a.s.) continuous $\mathbb{R}^d$ -valued $(\mathcal{F}_t)_{t\geq 0}$ -adapted process, then $T_A:=\inf\left\{t\geq 0\vert X_t\in A\right\}$ is a stopping time for every open or closed Borel set $A\in\mathcal{B}(\mathbb{R}^d)$ .
|continuity|stochastic-processes|brownian-motion|stopping-times|filtrations|
1
Bound of squarefree part of an integer
I am studying the paper DIOPHANTINE EQUATIONS OF THE FORM $F(X) = G(Y)$ - AN EXPOSITION which discusses the result of Erdos and Selfridge. I am unable to understand the highlighted statement ''Clearly each prime factor of each $a_i$ is less than $k$ ." For example, if I take $n=49$ and $k=4$ , then we have $50\cdot51\cdot52\cdot53 = y^2$ and $$50 = 2 \cdot 5^2, \quad \quad 51 = 3\cdot17, \quad \quad 52= 13\cdot2^2, \quad \quad 53= 53.$$ Clearly, $13 \nleq 4.$ What am I misunderstanding here? Please help me.
The highlighted statement only applies if $r:=\prod_{i=1}^k a_i$ is a square, which is not the case in your example. Let $p$ be a prime factor of $a_i$ with $p \ge k$ . Then $p$ divides $n+i$ , but does not divide any other of the factors $n+1,\dots,n+k$ . Therefore, $p^2$ does not divide $r$ , and $r$ is not a square, in contradiction to the assumption.
|number-theory|elementary-number-theory|divisibility|factoring|
1
Finding the tangent line to curve to the ellipse $(x-3)^2+\frac{(y-4)^2}{4}=1$ through the origin
I am told to find the two tangent lines to the ellipse that pass through the origin, but have been stuck for far too long with my approach, hence am thinking that my approach may be flawed. Here is what I have so far: If I interpret the ellipse as the level curve of some function $f(x,y)=1$ , then I can use the fact that the gradient vector is perpendicular to the ellipse at every point $(a,b)$ on it. Computing the partials, I get that the gradient vector at any point $(a,b)$ is $$\nabla f(a, b) = \left( 2(a-3), \frac{(b-4)}{2}\right),$$ thus the tangent vector at $(a,b)$ is $$\left(-\frac{(b-4)}{2}, 2(a-3)\right),$$ and the equation of any tangent line to the ellipse passing through the origin is $$k\left(-\frac{(b-4)}{2}, 2(a-3)\right), \quad k\in \mathbb R.$$ However, I really don't get how I'm supposed to find... another tangent line? Have I made an oversight in one of the steps of my reasoning? To resolve this, I also tried the approach of parametrizing the ellipse into a vector
From your first method, you know that a vector of the form $$\vec t = \left(-\frac{1}{2} k(b-4), 2k(a-3) \right)$$ is tangent to the ellipse at the point $(a,b)$ . Such a tangent will pass through the origin if there exists a $k \ne 0$ such that $(a,b) = \vec t$ . The reason is that if you imagine drawing such a tangent from the origin, it will pass through $(a,b)$ and be parallel to the vector $\vec t$ . So for a suitable scaling constant $k$ , the vector $(a,b)$ will be equivalent to $\vec t$ . Moreover, we require $(a,b)$ to be on the ellipse. Hence we require solutions of the system $$\begin{align} -\frac{1}{2} k(b-4) &= a, \\ 2k(a-3) &= b, \\ (a-3)^2 + \frac{(b-4)^2}{4} &= 1. \end{align}$$ This gives us $$(a,b,k) = \left( \frac{4}{13}(9 \pm \sqrt{3}), \; \frac{12}{13}(4 \mp \sqrt{3}), \; \pm 2 \sqrt{3} \right)$$ where the signs must be taken either $(+,-,+)$ or $(-,+,-)$ .
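These values can be verified symbolically; the following SymPy sketch (added for illustration) plugs both sign patterns into the three equations:

from sympy import sqrt, Rational, simplify

s3 = sqrt(3)
for sign in (1, -1):                       # the (+,-,+) and (-,+,-) sign patterns
    a = Rational(4, 13) * (9 + sign * s3)
    b = Rational(12, 13) * (4 - sign * s3)
    k = 2 * sign * s3
    print(simplify(-k * (b - 4) / 2 - a),            # first equation: 0
          simplify(2 * k * (a - 3) - b),             # second equation: 0
          simplify((a - 3)**2 + (b - 4)**2 / 4 - 1)) # ellipse equation: 0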
|linear-algebra|multivariable-calculus|conic-sections|parametrization|
1
The operation $ (a,b)(c,d)=(ac-bd,ad+bc) $ on $\Bbb R\times\Bbb R\backslash (0,0)$ yields a group
Here is the binary operation $ *: \mathbb{R}\times \mathbb{R} \backslash (0,0) $ defined by $ (a,b)(c,d)=(ac-bd,ad+bc) $ . My idea is that to show this is a group ( $\mathbb{R}\times \mathbb{R} \backslash (0,0), * $ ), I need to show that $ * $ is well-defined and associative and then show it has an identity and inverse. I am struggling to do the first part. How do I show $ * $ is well-defined (and is the first part required)? Is showing that $ ac-bd=0,ad+bc=0 $ will only be true if $a=b=c=d=0$ sufficient?
Well-definedness means, as you said, that $(ac-bd,ad+bc)\ne(0,0)$ if $(a,b)\ne(0,0)$ and $(c,d)\ne(0,0)$ . Suppose $(c,d)\ne(0,0)$ . Then we need to prove that $$ ac-bd=0 \qquad ad+bc=0 $$ has only the trivial solution $a=b=0$ . This is a linear system (in $a$ and $b$ ) with determinant $c^2+d^2$ , which is nonzero. Note that essentially the same linear system, in the form $$ ac-bd=c \qquad ad+bc=d $$ provides the identity, the only solution being $a=1$ and $b=0$ . The inverse also comes from a linear system: solving $$ ac-bd=1 \qquad ad+bc=0 $$ gives the inverse of $(c,d)$ , and Cramer's rule provides $$ a=\dfrac{c}{c^2+d^2} \qquad b=\dfrac{-d}{c^2+d^2} $$
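Underneath, the operation is just multiplication of the complex numbers $a+bi$ and $c+di$ ; a few illustrative lines of Python make this visible:

# (a,b)*(c,d) = (ac - bd, ad + bc) is exactly multiplication of a+bi and c+di,
# which is where the group structure and the inverse formula come from.
def star(p, q):
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def inverse(p):
    c, d = p
    n = c * c + d * d            # nonzero whenever (c, d) != (0, 0)
    return (c / n, -d / n)

p = (3.0, 4.0)
print(star(p, inverse(p)))       # (1.0, 0.0), the identity
print(star((1, 0), p))           # (3.0, 4.0): (1, 0) acts as identity
print(star(p, (2.0, -1.0)), (3 + 4j) * (2 - 1j))  # same numbers, (10, 5)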
|abstract-algebra|group-theory|functions|binary-operations|
0
How did Artin discover the function $f(x)=\frac{(x^2-x+1)^3}{x^2(x-1)^2}$ with the properties $f(x)=f(1-x)=f(\frac{1}{x})$?
In Artin's "Galois Theory" P38, he said the function $$f(x) = \frac{(x^2 - x + 1)^3}{x^2(x-1)^2}$$ satisfies the properties of $f(x)=f(1-x)=f(\frac{1}{x})$ . Is the function given by some rational step or just by a flash of insight? If $f(0)$ is a number, then $f(0) = f(\frac{1}{0})$ . So that the domain of definition of f(x) does not include 0.(maybe. I know it's not rigorous) Then the domain of definition of f(x) does not include 1 either. Thus I think it is a function like $f(x)=\frac{g(x)}{x^a(x-1)^bh(x)}, h(0)*h(1) \neq 0$ . Then I tried $a=1, b=1$ , failed. but $a = 2, b = 2$ succeed. However, I think that's a really weird way to go about it. Does the question like" $f(x)$ is a rational function that satisfies the properties of $f(x) = f(g_1(x)) = f(g_2(x)) = ... =f(g_n(x)). \forall k \in \mathbb N^+, g_k(x)$ is a rational function. Now give a example of f(x)." has an easy way to solve?
The rational complex functions can be transformed by the group of linear fractional Moebius transformations $$w =\frac{\alpha z + \beta}{\gamma z + \delta}$$ given $$\det\left( \begin {array} {cc} \alpha & \beta \\ \gamma & \delta \\\end{array} \right) \ne 0.$$ Nesting two Moebius transformations $$w =\frac{\alpha z + \beta}{\gamma z + \delta}, \ v=\frac{\epsilon w + \zeta}{\eta w + \theta}$$ the coefficient matrix is the product of the matrices, and with $\det = 1$ it is the group $SL(2,\mathbb C)$ . In order to work within the complex integers for solutions of algebraic equations with rational coefficients, the subgroup generated by the integer translations $$ z\to z+1, \ z\to z+i$$ and the inversion at the unit circle $$ z\to \frac{1}{z}$$ is the main tool to reduce and produce the representations of the root algebra.
|functions|polynomials|galois-theory|rational-functions|
0
Discontinuous function has a zero.
Let $f,g: [a,b] \to \mathbb{R}$ where $g$ is a continuous function, $f+g$ is non-decreasing and $f(a)>0>f(b)$ . Prove that we can find a point $c$ such that $f(c)=0$ . Given that $f+g$ is non-decreasing, I know that it has at most a countable number of points where it is discontinuous, and all of them are jump discontinuities, so the left and right limits at each point exist. Given that the right and left limits of $f+g$ and of $g$ exist, we see that $f$ also has right and left limits at each point. However, I'm stuck on how to proceed with the proof. Any help will be greatly appreciated.
That's actually an interesting problem. One way is repeating one of the proofs of the IVT. Denote $a_1=a, b_1=b$ , and let $h_1=\frac{a_1+b_1}{2}$ . If $f(h_1)=0$ then we are done. If $f(h_1)>0$ then we denote the interval $[h_1,b_1]$ by $[a_2,b_2]$ , and if $f(h_1)<0$ , denote the interval $[a_1,h_1]$ by $[a_2, b_2]$ . In either case, we have $f(a_2)>0>f(b_2)$ . Now let $h_2=\frac{a_2+b_2}{2}$ and repeat the process. If $f(h_2)=0$ we are done, if $f(h_2)>0$ then define $[a_3,b_3]=[h_2,b_2]$ , if $f(h_2)<0$ define $[a_3,b_3]=[a_2,h_2]$ . In either case, $f(a_3)>0>f(b_3)$ . Now continue in a similar way. If at some point we have $f(h_n)=0$ then we are done. Otherwise, we have a decreasing sequence of intervals $[a_n,b_n]$ such that $f(a_n)>0>f(b_n)$ , and their lengths tend to zero. By Cantor's intersection theorem, there is a unique point $c\in\bigcap_{n=1}^{\infty}[a_n,b_n]$ , and it satisfies $c=\lim\limits_{n\to\infty}a_n=\lim\limits_{n\to\infty}b_n$ . Now, since $f(a_n)>0$ , for each $n$
|real-analysis|continuity|
1
One of the numbers $\zeta(5), \zeta(7), \zeta(9), \zeta(11)$ is irrational
I am reading an interesting paper One of the numbers ζ(5), ζ(7), ζ(9), ζ(11) is irrational by Zudilin. We fix odd numbers $q$ and $r$ , $q\geq r+4$ , and a tuple $\eta_0,\eta_1,\dots,\eta_q$ of positive integer parameters satisfying the conditions $\eta_1\leq \eta_2\leq\dots\leq \eta_q$ and $$ \eta_1+\eta_2+\dots+\eta_q\leq \eta_0\left(\frac{q-r}{2}\right)\tag{1}$$ Define $$F_n:=\frac{1}{(r-1)!}\sum_{t=0}^\infty R_n^{(r-1)}(t)\tag{2}$$ and note that $R_n(t)=O(t^{-2})$ . We put $m_j=\max\{\eta_r,\eta_0-2\eta_{r+1},\eta_0-\eta_1-\eta_{r+j}\}$ for $j=1,2,\dots,q-r$ and define the integer $$\Phi_n:=\prod_{\sqrt{\eta_0 n}<p\leq m_1 n}p^{\varphi(n/p)},$$ where only primes enter the product and $$\varphi(x)=\min_{0\leq y<1}\ \cdots,$$ where $[\,.\,]$ denotes the ceiling function. Let $D_N$ denote the lcm of $1,2,\dots,N$ . Lemma $1$ : ( $2$ ) defines a linear form of $\zeta(r+2),\zeta(r+4),\dots,\zeta(q-2)$ with rational coefficients; moreover, $$ D_{m_1 n}^r\, D_{m_2 n}\cdots D_{m_{q-r} n}\cdot\Phi_n^{-1}\cdot F_n\in\mathbb{Z}+\mathbb{Z}\zeta(r+2)+\mathbb{Z}\zeta(r+4)+\dots+\mathbb{Z}\zeta(q-2)$$
Not an answer, just some speculation about the gamma function and Bernstein polynomials, which is of independent interest here but could help. Using (fallaciously) the inverse function of $f(x)$ : $$f\left(x\right)=e^{x^{4}-\ln\left(\frac{1}{y}\right)}!,\qquad x!=\Gamma(x+1),\quad 0<y<1$$ Then using the Bernstein form it seems we have, $\forall x>0$ : $$\lim_{n\to \infty}\sum_{k=0}^{n}\frac{f\left(\frac{k}{n}\right)n!}{k!\left(n-k\right)!}\left(\ln\left(x^{\frac{1}{6}}+1\right)\right)^{k}\left(1-\ln\left(1+x^{\frac{1}{6}}\right)\right)^{\left(n-k\right)}=y!$$ The only advantage I see is that we have a local inversion of the Gamma function and so of the digamma function. Perhaps I'm wrong.
|number-theory|analytic-number-theory|riemann-zeta|computational-mathematics|computational-number-theory|
0
Equivalent of Pauli matrices in 4 dimensions
I would like to decompose the following 4x4 matrix: $$ \mathrm{H} = \begin{pmatrix} a & b & b & 0 \\ b & 0 & 0 & b \\ b & 0 & 0 & b \\ 0 & b & b & (-a+c)\\ \end{pmatrix} $$ in such a way that computing the exponential of this matrix would have an equivalent representation to the generalised Euler formula $$e^{ia(\hat{n}\cdot\vec{\sigma})} = \Bbb{1}\cos(a)+i(\hat{n}\cdot\vec{\sigma})\sin(a)\tag{1}\label{eq1}$$ with $$ \mathrm {M} = a(\hat{n}\cdot\vec{\sigma}) $$ $\mathrm M$ being the initial matrix. Here $\vec{\sigma}$ is the so-called Pauli vector containing the Pauli matrices as elements, and $\hat{n}$ is the normalised vector whose coefficients constitute the decomposition of any 2x2 matrix with respect to the Pauli matrices. $\Bbb 1$ in the above represents the 2x2 unit matrix. Is there an analogue to the spin matrices in 4x4 dimensions which can serve as the basis for this decomposition?
The imaginary multiples of the Pauli matrices form the Lie algebra of the rotation group representation $SU(2)$ , with $$i \sigma_1\, i\sigma_2 - i \sigma_2\, i\sigma_1 = -2\,(i \sigma_3),\ \dots$$ Additionally, their squares are $-\mathrm{Id}$ . This algebra is essential in order to sum the exponential series, the even powers giving alternating multiples of the unit matrix and the odd powers alternating multiples of the matrix again. This scheme can be extended to an arbitrary euclidean $\mathbb R^n$ , yielding the Clifford algebra $\mathit {Cl}(n)$ with a special representation by a set of anticommuting matrices, squaring in part to $-\mathrm{Id}$ , a very subtle subset of $GL(2^n)$ , with $2^n$ the dimension of the space of subsets of the basis, very different for each $n \ \text{mod } 8$ . Since the algebra is independent of unitary transformations of the representation, one never works with special matrix representations, but uses the algebraic rules on the basis elements up to the maximal product length $$1, e_1\dots e_n,\quad e_1 e_2 \dots
|linear-algebra|abstract-algebra|matrices|
0
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen.
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen. My Attempt The relevant cards are $26$ red cards and $2$ black queens i.e. in total $28$ cards. I took four cases. Case 1 : One non-queen red card and one red queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 2 : One non-queen red card and one black queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 3 : Two red queens The probability would be $$\frac{\binom{2}{2}}{\binom{28}{2}}$$ Case 4 : One red queen and one black queen The probability would be $$\frac{\binom{2}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ So the required probability $$=\frac{48+48+1+4}{\binom{28}{2}}=\frac{101}{378}$$ Is the above solution correct?
One obvious problem with your solution is that you have $28 \choose 2$ as the denominator. You need to count all the possibilities, and that would be $52 \choose 2$ . Here's one way to solve this question. The requirements set by the question can be met through $3$ separate cases. Case 1. Both cards are red Clearly, one card must be a queen, the other can be anything. In other words, we can have two queens or one queen and one non-queen. So the probability for this case would be $$\frac{{2 \choose 2} + {2 \choose 1} \times {24 \choose 1}}{{52 \choose 2}} = \frac{98}{{52 \times 51}}$$ Case 2. First card is red, second is black The black card must be a queen, the red one can be anything. The probability would be $$\frac{26}{52} \times \frac{2}{51} = \frac{52}{52 \times 51}$$ Case 3. First card is black, second is red Again, the black card must be a queen, the red one can be anything. So the probability would be $$\frac{2}{52} \times \frac{26}{51} = \frac{52}{52 \times 51}$$ The events ar
|probability|probability-theory|solution-verification|
0
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen.
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen. My Attempt The relevant cards are $26$ red cards and $2$ black queens i.e. in total $28$ cards. I took four cases. Case 1 : One non-queen red card and one red queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 2 : One non-queen red card and one black queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 3 : Two red queens The probability would be $$\frac{\binom{2}{2}}{\binom{28}{2}}$$ Case 4 : One red queen and one black queen The probability would be $$\frac{\binom{2}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ So the required probability $$=\frac{48+48+1+4}{\binom{28}{2}}=\frac{101}{378}$$ Is the above solution correct?
Your answer can't be correct, as you have taken the sample space as $28$ cards, whereas you are drawing from a full pack of $52$ . "One red card" does not exclude that card being a Queen, nor does "the other card is a Queen" exclude it being a red Queen. Also, "one red card" doesn't necessarily mean the first card, so the possible unordered selections are: red Queen, red Queen; red Queen, black Queen; red non-queen, Queen. Then, using the hypergeometric distribution (drawing without replacement), $$Pr = \dfrac{\binom22 +\binom21\binom21+ \binom{24}1\binom41}{\binom{52}2}= \frac{101}{1326}$$
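For readers who like empirical checks, here is an illustrative Monte Carlo simulation (the encoding of suits and the choice of queen rank are arbitrary):

import random

# Cards 0..51; suits 0,1 are red, 2,3 black; ranks 0..12 with rank 10 as queen (say)
def is_red(c):   return c // 13 < 2
def is_queen(c): return c % 13 == 10

trials, hits = 200_000, 0
rng = random.Random(1)
for _ in range(trials):
    c1, c2 = rng.sample(range(52), 2)
    # "one is red and the other is a queen": some assignment of the roles works
    if (is_red(c1) and is_queen(c2)) or (is_red(c2) and is_queen(c1)):
        hits += 1

print(hits / trials, 101 / 1326)   # both approximately 0.0762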
|probability|probability-theory|solution-verification|
1
Proving that a rational b-spline is equivalent to some conic section.
I'm pretty sure that we can trace a conic section like a circle exactly using a NURBS curve. The first quadrant of a unit circle, for example, can be traced by: $C(t)=\frac{(1-t)^{2}P_{0}+2t(1-t)P_{1}+2t^{2}P_{2}}{(1-t)^{2}+2t(1-t)+2t^{2}}$ where $P_{0}=\left(0,1\right)$ , $P_{1}=\left(1,1\right)$ , and $P_{1}=\left(1,0\right)$ . But how do I prove that the curve $x^{2} + y^{2} = 1 : 0 \leq x \leq 1$ is equivalent to $C(t)$ ?
Your equation for $C(t)$ can be broken down into equations for $x$ and $y$ separately: $$ x(t) = \frac{2t}{1+t^2} \quad ; \quad y(t) = \frac{1-t^2}{1+t^2} $$ It’s easy to check that $[x(t)]^2 + [y(t)]^2 = 1$ for all $t$ . This means that every point $C(t)= (x(t),y(t))$ lies on the unit circle. Also it’s clear that $0 \le x(t) \le 1$ if $0 \le t \le 1$ . Can you take it from there? The same sort of reasoning will work whenever you have parametric equations and an implicit equation for a conic. In fact, it will work whenever you have parametric equations and an implicit equation for any curve. A rational quadratic curve will never quite cover an entire conic — there will always be at least one point missing. For example, your parametric equation $C(t)$ will never give you the point $(0,-1)$ on the unit circle no matter what parameter value $t$ you use.
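A short SymPy sketch of this check, building $x(t)$ and $y(t)$ from the stated control points and weights (added for illustration):

from sympy import symbols, simplify

t = symbols('t')
w = (1 - t)**2 + 2*t*(1 - t) + 2*t**2      # weight sum; simplifies to 1 + t^2
x = (2*t*(1 - t) + 2*t**2) / w             # x from P0=(0,1), P1=(1,1), P2=(1,0)
y = ((1 - t)**2 + 2*t*(1 - t)) / w         # y from the same control points

print(simplify(w))             # t**2 + 1
print(simplify(x**2 + y**2))   # 1: every point C(t) lies on the unit circle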
|geometry|analytic-geometry|conic-sections|bezier-curve|
0
$\int\frac{1}{x^4+2x^2+3}~dx$
Here's the process which I tried: Substitution: Substitute $x^{2} = t$ . This creates a new variable $t$ and relates it to $x$ through differentiation ( $2x \, dx = dt$ ). Adjust the integral: Rewrite the integral in terms of $t$ using the substitution and $dt$ . Solve the resulting integral: This may involve factoring the denominator or using other techniques for integrating rational functions. Substitute back: Replace $t$ with its original definition ( $x^{2}$ ) to obtain the antiderivative in terms of $x$ .
$$\frac{1}{x^4+2x^2+3}=\frac{1}{2\sqrt{3}}\left({\frac{\sqrt{3}-x^2}{x^4+2x^2+3}+\frac{x^2+\sqrt{3}}{x^4+2x^2+3}}\right)=\frac{1}{2\sqrt{3}}\left({\frac{\frac{\sqrt{3}}{x^2}-1}{\left({x+\frac{\sqrt{3}}{x}}\right)^2+2-2\sqrt{3}}+\frac{\frac{\sqrt{3}}{x^2}+1}{\left({x-\frac{\sqrt{3}}{x}}\right)^2+2+2\sqrt{3}}}\right)$$ Therefore: $$\frac{1}{x^4+2x^2+3}=\frac{1}{2\sqrt{3}}(f(x) + g(x))$$ Where: $$f(x)=\frac{\frac{\sqrt{3}}{x^2}-1}{\left({x+\frac{\sqrt{3}}{x}}\right)^2+2-2\sqrt{3}}$$ $$g(x)=\frac{\frac{\sqrt{3}}{x^2}+1}{\left({x-\frac{\sqrt{3}}{x}}\right)^2+2+2\sqrt{3}}$$ We have: $$F(x)=\int \frac{\frac{\sqrt{3}}{x^2}-1}{\left({x+\frac{\sqrt{3}}{x}}\right)^2+2-2\sqrt{3}} dx$$ Let $u:=x+\frac{\sqrt{3}}{x}$ $$F(x)=-\int \frac{du}{u^2-(2\sqrt{3}-2)}=\frac{1}{\sqrt{2\sqrt{3}-2}}\tanh^{-1}\left({\frac{u}{\sqrt{2\sqrt{3}-2}}}\right)$$ Therefore: $$F(x)=\frac{1}{\sqrt{2\sqrt{3}-2}}\tanh^{-1}\left({\frac{x+\frac{\sqrt{3}}{x}}{\sqrt{2\sqrt{3}-2}}}\right)+C$$ and we have: $$G(x)=\int \frac{\frac{\
|calculus|integration|
0
Failure rate of two identical items
I have two identical pipelines and the failure rate of a leak in one of them, expressed in events/year. The two pipelines are identical and not interconnected. I want to know the failure rate of the situation where I have a leak in both pipelines. Initially I multiplied the frequencies; I asked my boss for help and he said that this is not possible because the unit of measurement cannot be ev²/y². He explained the solution to me, but I don't remember the answer; I'm sure I have to somehow multiply some frequency and some probability. The other piece of data I need to use is that the estimated life of the plant is 20 years. 20 years is the observation time of the system and should be necessary data for calculating the probability. I need help, thank you very much!!
If the breakdown rate is $\lambda$ per year then the one year reliability of that item (in its simplest model) is $R=e^{-\lambda}$ . You have a system of two such items in parallel. The one year unreliability of an item is $U=1-R$ and the unreliability of the system is $U^2$ . The one year reliability of this system is $1-U^2$ . You can calculate this figure to give a number between 0 and 1, let's call it $R_s$ . If $\lambda_s$ is the breakdown rate of the system then $R_s=e^{-\lambda_s}$ and taking logs will give you your system breakdown rate. You should read up on reliability and the exponential distribution for more examples.
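A minimal Python sketch of this recipe, with an assumed illustrative leak rate (the question gives no numbers):

import math

lam = 0.05              # assumed per-pipeline leak rate, events/year (illustrative)
R = math.exp(-lam)      # one-year reliability of a single pipeline
U = 1 - R               # one-year unreliability
R_s = 1 - U**2          # parallel system: fails only if both pipelines fail
lam_s = -math.log(R_s)  # equivalent system breakdown rate, events/year

print(R, R_s, lam_s)    # lam_s is far smaller than lam, as expected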
|probability|logic|reliability|
0
tough exponential product and sum
(Here is a transcription of the image): $$\dfrac{e^{k}-1}{k}x+e^{k}-1=\dfrac{e^{x+k}-1}{e^{x}}$$ I'm not sure how to start solving the equation in the figure.
Assuming we are solving for $x$ , this is how I would approach it. Notice $x=0$ is a root of the equation, as $\dfrac{e^{k}-1}{k}\cdot 0+e^{k}-1=\dfrac{e^{0+k}-1}{e^{0}} \implies e^k-1=e^k-1$ . Let $f(x)=\dfrac{e^{k}-1}{k}x+e^{k}-1-\dfrac{e^{x+k}-1}{e^{x}}$ . Since $\dfrac{e^{x+k}-1}{e^{x}}=e^k-e^{-x}$ , this simplifies to $f(x)=\dfrac{e^{k}-1}{k}x-1+e^{-x}$ . $f$ is differentiable with $f'(x)=\dfrac{e^k-1}{k}-e^{-x}$ and $f''(x)=e^{-x}>0$ , so $f$ is strictly convex and therefore has at most two zeros. We know $e^k \geq k+1$ , with equality only for $k=0$ , and $k \neq 0$ since it appears in a denominator in the original equation; hence $\frac{e^k-1}{k}>1$ for $k>0$ and $\frac{e^k-1}{k}<1$ for $k<0$ , so $f'(0)\neq 0$ and $x=0$ is not the minimum of $f$ . So $x=0$ is always a solution, and by convexity there can be at most one further root (for instance, $k=1$ gives the second root $x=-1$ ).
|algebra-precalculus|
0
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen.
Two cards are drawn from a well shuffled pack of $52$ cards. Find the probability that one of them is a red card and the other is a queen. My Attempt The relevant cards are $26$ red cards and $2$ black queens i.e. in total $28$ cards. I took four cases. Case 1 : One non-queen red card and one red queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 2 : One non-queen red card and one black queen The probability would be $$\frac{\binom{24}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ Case 3 : Two red queens The probability would be $$\frac{\binom{2}{2}}{\binom{28}{2}}$$ Case 4 : One red queen and one black queen The probability would be $$\frac{\binom{2}{1}\times\binom{2}{1}}{\binom{28}{2}}$$ So the required probability $$=\frac{48+48+1+4}{\binom{28}{2}}=\frac{101}{378}$$ Is the above solution correct?
The ideas presented here are really nice. They're taking each individual case and adding them up. I'm showing this with a single concept, but it requires a good grasp of combinations. Combination is based upon choosing (or picking). Let $E$ be the event of picking a queen and a red card. Here, you're trying to choose 2 cards out of 52, so the total possible picks $= \binom{52}{2}$ . Now the favourable picks are: 1 red card, possible picks $= \binom {26}{1}$ ; and 1 queen, possible picks $= \binom {4}{1}$ . It states 1 red card AND 1 queen, which means the possible picks are multiplied to get the favourable outcomes $=\binom{26}{1} \times \binom {4}{1}$ . But we should notice that this count includes the $2$ selections where the same red queen fills both roles, which are not pairs of two cards at all, and it counts the pair of the two red queens twice: [Diamond Queen, Hearts Queen] and [Hearts Queen, Diamond Queen] are the same collection. So we should subtract $3$ from the total favourable outcomes. Hence $$P(E)= \frac{\binom{26}{1} \times \binom {4}{1}-3}{\binom{52}{2}} = \frac{101}{1326}$$
|probability|probability-theory|solution-verification|
0
Prove topological conjugacy of an affine and a quadratic map using a linear map
Let $Q_c(x)=x^2+c$ . Prove that if $c<\frac14$ , there is a unique $\mu>1$ such that $Q_c$ is topologically conjugate to $F_\mu(x)=\mu(1-x)$ via a map of the form $h(x)=\alpha x+\beta$ . Interpretation: $h(x)$ is a linear map. $Q_c(x)$ and $F_\mu(x)$ are a quadratic and an affine map respectively. The claim is that if $c<\frac14$ , then there exists $\mu>1$ such that the quadratic and affine maps are conjugate to one another. Definition. (Topological conjugacy). Let $Q_c:X\to X$ and $F_\mu: Y\to Y$ , and let $x_1\ne x_2$ . Then $Q_c$ and $F_\mu$ are topologically conjugate if $\exists$ a homeomorphism $h:X\to Y$ such that $$h\circ Q_c = F_\mu\circ h$$ or $$h(Q_c(x_1)) = F_{\mu}(h(x_2)).$$ Now, we form, with $x_1\ne x_2$ , $$h(Q_c(x_1)) = \alpha (x_1^2+c)+\beta$$ and $$F_{\mu}(h(x_2))= \mu(1-\alpha x_2-\beta) $$ hence, $$\alpha (x_1^2+c)+\beta=\mu(1-\alpha x_2-\beta)$$ which gives \begin{equation} c = -\frac{1}{\alpha}(\beta \mu + \beta - \mu + \alpha x_1^2 + \alpha \mu x_2), \qquad \alpha\ne 0 \end{equation}
There are lots of quadratic recursion sequences $$ z_{n+1}=az_n^2+bz_n+c. $$ Inserting a linear transformation $z=\alpha x+\beta$ results in another quadratic recursion $$ x_{n+1}=a'x_n^2+b'x_n+c' $$ with the same qualitative properties. Now one can ask: what are the simplest, easiest-to-compare examples in such a transformation class? The linear transformation has 2 free parameters. That generically allows one to pose 2 conditions on the transformed coefficients $a',b',c'$ . The condition combinations that have "won" as being popular are $a'=1$ , $b'=0$ , resulting in the Mandelbrot iteration, and $c'=0$ , $a'+b'=0$ , giving the Feigenbaum/logistic map. These conditions are themselves quadratic equations in the transformation parameters. Thus it is unsurprising that they can have complex solutions. So it can be of interest to ask when, given real coefficients in the original recursion, the transformation parameters are also real. The task asks when the Mandelbrot map with real $c$ can
|dynamical-systems|
1
Where is my mistake in this integral equation?
I'm given the equation $$\frac{df(x)}{dx}=Ae^{-x}+\int_{0}^{x}G(y)e^{-(x-y)}dy$$ for $0<x$ , where $A$ is constant. The question requires us to find $f$ (the integral in $y$ doesn't have to be solved). It appears that the final answer is $$f(\tilde{x})-f(0)=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)(1-e^{-(\tilde{x}-y)})dy$$ My question is why it does not go like this: $$f(\tilde{x})-f(0)=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)e^{y}dy\int_{0}^{\tilde{x}}e^{-x}dx$$ $$=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)e^{y}(1-e^{-\tilde{x}})dy$$ I don't understand why my answer is wrong.
From your solution, it appears you are integrating both sides from $0$ to $\tilde{x}$ . Just be careful because for the second term the integration interval depends on $x$ ; thus, you can't just integrate and switch the order of integration. In general, \begin{align} \int_{0}^{\tilde{x}}\left(\int_{a(x)}^{b(x)}h(x,y)dy\right)dx \neq \int_{a(\tilde{x})}^{b(\tilde{x})}\left( \int_{0}^{\tilde{x}}h(x,y)dx\right)dy\end{align} My suggestion would be to differentiate both sides of the original equation and use Leibniz integral rule. You will obtain the following differential equation: \begin{align} \frac{df(x)}{dx} = G(x) - \frac{d^2f(x)}{dx^2} \end{align} Now you can safely integrate both sides and use the original equation to reach the correct answer: \begin{align} f(\tilde{x}) - f(0) &= \int_{0}^{\tilde{x}}G(y)dy - f'(\tilde{x}) + f'(0) \\ &= \int_{0}^{\tilde{x}}G(y)dy - Ae^{-\tilde{x}}-\int_{0}^{\tilde{x}}G(y)e^{-(\tilde{x}-y)}dy + A \\ & = A(1-e^{-\tilde{x}}) + \int_{0}^{\tilde{x}}G(y)(1
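To see the resulting identity pass a concrete test, here is a SymPy sketch with an arbitrary choice of $G$ (my own example, not from the question):

from sympy import symbols, exp, integrate, diff, simplify

x, y, A = symbols('x y A', positive=True)
G = y**2   # an arbitrary concrete choice for G(y), just for the test

# f(x) - f(0) from the final answer
f = A*(1 - exp(-x)) + integrate(G*(1 - exp(-(x - y))), (y, 0, x))

# the right-hand side of the original equation for f'(x)
rhs = A*exp(-x) + integrate(G*exp(-(x - y)), (y, 0, x))

print(simplify(diff(f, x) - rhs))   # 0, so the two sides agree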
|calculus|integration|ordinary-differential-equations|definite-integrals|
1
Defining a Quad Spherical Cube Tile as a Uniform NURBS Surface?
I am trying to create NURBS surface that perfectly fits one face of a Quadrilateralized Spherical Cube (QSC) [also called a Cobb sphere in some contexts, I believe]. I have seen some visualizations of what I'm looking for, but haven't found a good process to determine the control point locations and weights. Here's one example of what I want: two NURBS sphere tiles with control points visible . This is adapted from a paper* that seems to be focused on fluid mechanics, and I don't need to be anywhere near that level of detail. And I can't determine from the paper the exact weights used, or if it satisfies my other constraints. Another, maybe clearer example of what I want the surface to look like: Ideal QSC Tile Views The constraints I'd like to meet (if possible): Control point grids in arbitrary sizes, while still perfectly fitting this surface. 5x5, 6x6, 11x11, etc... at least up to a certain point where we'd lose the benefit of NURBS's simplicity and elegance. Preferably degree 3, b
A bit late, I know, but maybe this is what you want. Tiling the sphere with rational Bézier patches James E. Cobb University of Utah, 1988. https://collections.lib.utah.edu/dl_files/4e/77/4e7746dd53c79f8557272b92b47d2d407da4931a.pdf Table 4 on page 11 gives you control points and weights for a patch representing one-sixth of a sphere.
|polynomials|euclidean-geometry|surfaces|bezier-curve|spline|
0
Faster way to find the eigenvalues of a 4x4 real matrix?
I want to calculate the eigenvalues and eigenspaces of this matrix for self-study: $\frac{1}{31}\left( \begin{array}{rrrr} 43 & 9 & -23 & -61\\ 16 & -19 & -10 & 22 \\ 130 & 51 & -89 &-108 \\ 36 & -4 & -7 & -59\\ \end{array}\right)$ I tried using the normal method of finding $\det(A - \lambda I)$ , but just these massive numbers kept coming up. I also tried using different block matrix formulae, but I ended up with the same problem. I wanted to ask if there is maybe a better, more efficient way of finding the eigenvalues using some trick. I know that the trace of this matrix, and therefore the sum of the eigenvalues, is $-4$ , but beyond that suggestions are appreciated. I am not looking for a full solution here.
Hint: It is not too difficult to compute the characteristic polynomial of your matrix $A$ . It is given by $$ \chi_A(t)=(t+1)^4. $$ Let $$J=\begin{pmatrix} -1 & 1 & 0 & 0 \cr 0 & -1 & 1 & 0 \cr 0 & 0 & -1 & 1 \cr 0 & 0 & 0 & -1 \end{pmatrix} $$ It is easy to find an invertible matrix $S$ such that $SA=JS$ by solving a system of linear equations in the entries of $S$ . In other words, your matrix is similar to $J$ , and you can read off all invariants.
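The characteristic polynomial claim is easy to verify by machine; a short SymPy computation (added as a check):

from sympy import Matrix, symbols, factor

t = symbols('t')
A = Matrix([[43, 9, -23, -61],
            [16, -19, -10, 22],
            [130, 51, -89, -108],
            [36, -4, -7, -59]]) / 31

print(factor(A.charpoly(t).as_expr()))   # expected: (t + 1)**4, as in the hint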
|linear-algebra|matrices|eigenvalues-eigenvectors|determinant|characteristic-polynomial|
0
inverse laplace transform of $\frac{1}{s+b}e^{-x\sqrt{\frac{s}{k}}}$
I am attempting to find the inverse Laplace transform of $\frac{1}{s+b}e^{-x\sqrt{\frac{s}{k}}}$ . The solution should be $$\frac{e^{-bt}}{2} \left( {e^{x\sqrt{\frac{-b}{k}}}\ \operatorname{erfc}\left(\frac{x+2kt\sqrt{\frac{-b}{k}}}{2\sqrt{kt}}\right)+e^{-x\sqrt{\frac{-b}{k}}}\operatorname{erfc}\left(\frac{x-2kt\sqrt{\frac{-b}{k}}}{2\sqrt{kt}}\right)}\right)$$ I attempted changing $s+b$ to $s'$ , which allowed me to find the inverse Laplace transform of $\frac{1}{s'}e^{-x\sqrt{\frac{s'-b}{k}}}$ ; however, the solution to this was missing the $\frac{e^{-bt}}{2}$ , i.e., the solution was $$\left( {e^{x\sqrt{\frac{-b}{k}}}\ \operatorname{erfc}\left(\frac{x+2kt\sqrt{\frac{-b}{k}}}{2\sqrt{kt}}\right)+e^{-x\sqrt{\frac{-b}{k}}}\operatorname{erfc}\left(\frac{x-2kt\sqrt{\frac{-b}{k}}}{2\sqrt{kt}}\right)}\right)$$ I also tried using the convolution theorem with $F(s) = \frac{1}{b+s}$ and $G(s) = e^{-x\sqrt{\frac{s}{k}}}$ , but was unable to solve the convolution integral. Any help would be appreciated.
I'm very interested in the correct solution of this problem. So I also tried the convolution approach $$f(x,t)=\mathcal{L}_s^{-1}\left[\exp \left(-x \sqrt{\frac{s}{k}}\right)\right](t)=\frac{k x e^{-\frac{x^2}{4 k t}}}{2 \sqrt{\pi } \sqrt{k^3 t^3}}$$ $$g(t)=\mathcal{L}_s^{-1}\left[\frac{1}{s+b}\right](t)=e^{-b t}=\sum_{j=0}^{\infty} \frac{(-b t)^j}{j!}$$ Now we evaluate the convolution integral $$\int_{0}^{t} f(x,\tau)\cdot g(t-\tau)\ d\tau=\int_0^{t} \sum_{j=0}^{\infty} \frac{k x e^{-\frac{x^2}{4 k \tau }} (-b (t-\tau ))^j}{2 \sqrt{\pi } j! \sqrt{k^3 \tau ^3}}\ d\tau$$ We exchange integration and infinite sum and get (Mathematica helps) $$u(x,t)=\sum_{j=0}^{\infty} \int_0^{t} \frac{k x e^{-\frac{x^2}{4 k \tau }} (-b (t-\tau ))^j}{2 \sqrt{\pi } j! \sqrt{k^3 \tau ^3}}\ d\tau=\sum_{j=0}^{\infty} (-b t)^j \left(\frac{\, _1F_1\left(-j;\frac{1}{2};-\frac{x^2}{4 k t}\right)}{\Gamma (j+1)}-\frac{x\cdot \, _1F_1\left(\frac{1}{2}-j;\frac{3}{2};-\frac{x^2}{4 k t}\right)}{\Gamma \left(j+\frac{1}{
|inverse-laplace|
0
Where is my mistake in this integral equation?
I'm given the equation $$\frac{df(x)}{dx}=Ae^{-x}+\int_{0}^{x}G(y)e^{-(x-y)}dy$$ for $0<x$ , where $A$ is constant. The question requires us to find $f$ (the integral in $y$ doesn't have to be solved). It appears that the final answer is $$f(\tilde{x})-f(0)=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)(1-e^{-(\tilde{x}-y)})dy$$ My question is why it does not go like this: $$f(\tilde{x})-f(0)=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)e^{y}dy\int_{0}^{\tilde{x}}e^{-x}dx$$ $$=A(1-e^{-\tilde{x}})+\int_{0}^{\tilde{x}}G(y)e^{y}(1-e^{-\tilde{x}})dy$$ I don't understand why my answer is wrong.
Assuming you know the Laplace Transform $$ s\hat f(s) = \frac{A}{s+1}+\hat G(s)\frac{1}{s+1}+ f_0 $$ or $$ \hat f(s) = \left(\frac 1s-\frac{1}{s+1}\right)A+\left(\frac 1s-\frac{1}{s+1}\right)\hat G(s) + \frac{f_0}{s} $$ and anti-transforming we have $$ f(x) = (1-e^{-x})A+\int_0^x G(\zeta)d\zeta - \int_{0}^{x}G(y)e^{-(x-y)}dy + f_0 $$
|calculus|integration|ordinary-differential-equations|definite-integrals|
0
Convergence of measures in symbolic dynamics
Let $T$ be the one-sided shift on sequences with $k$ symbols. What are the sufficient conditions for $\mu_n \to \mu$ where $\mu_n$ are all invariant measures? Is it sufficient to show that for each cylinder $A$ , $\mu_n(A)\to\mu(A)$ ? I think yes, since I think characteristic functions of cylinders $\chi_A$ are dense in continuous functions, but I have not been able to prove this claim, or find anything with google searches.
I think the first observation to note is that cylinders in $[k]^{\mathbb{N}}$ are balls with respect to an ultrametric measuring agreement of initial coordinates. This metric induces the standard product topology on the one-sided shift space. Next note that the space is compact, and so any continuous function $f:[k]^{\mathbb{N}}\to \mathbb{C}$ is uniformly continuous. So for every $\epsilon>0$ there exists $\delta>0$ such that $\omega, \eta \in [k]^{\mathbb{N}}$ with $d(\omega,\eta)<\delta$ implies that $\vert f(\omega)-f(\eta) \vert<\epsilon$ . There exists a length $\ell\in \mathbb{N}$ such that $\omega(j)=\eta(j)$ for all $1\leq j \leq \ell$ if and only if $d(\omega,\eta)<\delta$ . Then consider all the indicators $\mathbf{1}_A$ , where $A$ is any cylinder of length $\ell$ . Choosing for each cylinder $A$ an element $\omega_A\in A$ , we define $f_\epsilon:=\sum_{A} f(\omega_A)\cdot \mathbf{1}_A$ . Then $\Vert f-f_\epsilon\Vert_{\infty}<\epsilon$ . So the linear span of indicator functions is dense in $C\big( [k]^{\mathbb{N}}\big)$ .
|measure-theory|dynamical-systems|ergodic-theory|
0
Calculus: need help with if sum of a series converges or not
The series is: $\sin(x) + \sin^3(x) + \sin^5(x) + \dots$ where $x$ tends to $\frac{\pi}{2}$ . I am getting two different results by two different methods. Method $1$ : since in this case the common ratio is less than $1$ , I can use the infinite geometric progression sum formula, which gives me the result positive infinity. Method $2$ : if I first take the series to be $n$ terms and use the geometric progression sum formula, it gives me a $\frac00$ form. Applying L'Hôpital to it gives me the sum to be $n\cdot(\sin(x))^{2n-2}$ , where $x$ tends to $\pi/2$ and $n$ tends to infinity. This gives the sum to be $0$ , as anything less than $1$ raised to an ever-growing power tends to $0$ . I am probably getting some assumption in one of the methods wrong, leading it to be wrong, but can't figure out which one.
Assuming $0\leq x<\frac{\pi}{2}$ , we have $\sin(x)<1$ . This gives way to the summation of infinite GP formula ( $S=\frac{a}{1-r}$ , $a$ = first term, $r$ = common ratio), as your series can be represented as $$\sum_1^\infty \sin^{2n-1}(x) = \sin (x) + \sin^3(x)+\sin^5(x)+\dots=\frac{\sin(x)}{1-\sin^2(x)}=\frac{\sin (x)}{\cos^2 (x)} $$ Now $$x→\frac{π}{2} \implies S=\lim_{x\to\frac{π}{2}} \frac{\sin(x)}{\cos^2(x)}=∞$$ Hence this is a divergent series, and your logic in Method 1 was correct. The second approach, where you took $n$ terms, $$S=\lim_{n\to \infty}\lim_{x\to\frac{\pi}{2}} \sin x\,\frac{1-\sin^{2n}(x)}{1-\sin^2 x},$$ ultimately gives the same value: evaluating the limit in $x$ first (by L'Hôpital) gives $\lim_{n\to\infty} n\sin^{2n-2}\left(\frac{\pi}{2}\right)=\lim_{n\to\infty}n=\infty$ . Your Method 2 instead takes $n\to\infty$ first at a fixed $x<\frac{\pi}{2}$ , where indeed $n\sin^{2n-2}(x)\to 0$ . That operation is faulty because the two limits cannot be interchanged: you assumed $n$ finite while solving the limit in $x$ , and then let $n\to\infty$ as if $x$ were still fixed.
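A small numerical experiment (added for illustration) shows why the two limits cannot be interchanged:

import math

def partial_sum(x, n):
    return sum(math.sin(x) ** (2 * k - 1) for k in range(1, n + 1))

def closed_form(x):
    return math.sin(x) / math.cos(x) ** 2

for x in [1.0, 1.4, 1.55]:          # approaching pi/2 ~ 1.5708
    print(x, partial_sum(x, 10_000), closed_form(x))
# For fixed x < pi/2 the partial sums converge to sin(x)/cos^2(x),
# which itself blows up as x -> pi/2: the limits cannot be interchanged.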
|sequences-and-series|limits|
0
Solve $\int_{\frac{-π}{4}}^{\frac{π}{4}} \frac{(\sec^6 x - \tan^6 x)}{(a ^ x + 1) \cos^2 x} dx , (a > 0)$
This integral involves secant ( $\sec(x)$ ) and tangent ( $\tan(x)$ ) functions with even powers, and a cosine squared ( $\cos^2(x)$ ) term in the denominator. I tried to solve it using a combination of trigonometric identities and a strategic substitution. Here's how I approached it: Reduce $\sec^6(x)$ and $\tan^6(x)$ : We can use the identity $$\sec^2(x) = 1 + \tan^2(x)$$ to rewrite $\sec^6(x)$ . $$\sec^6(x) = (\sec^2(x))^3 = (1 + \tan^2(x))^3$$ Similarly, we can rewrite $\tan^6(x)$ using the same identity. $$\tan^6(x) = (\tan^2(x))^3$$ Substitute with a new variable: Let's introduce a new variable, say $u = \tan(x)$ . This substitution allows us to express $\sec(x)$ in terms of $u$ using the identity $$\sec(x) = \frac{1}{\cos(x)} = \sqrt{1 + \tan^2(x)} = \sqrt{1 + u^2}$$ Rewrite the integral with the new variable: Substitute $\sec(x)$ and $\tan(x)$ with their expressions in terms of $u$ and rewrite the differential term $dx$ using $$du = \sec^2(x)\, dx$$ The integral becomes: $$\int_{\
First, we must get rid of the annoying $a^x+1$ term. Substitute $-x \rightarrow x$ in the original equation. You get \begin{align} I = \int_{\frac{-π}{4}}^{\frac{π}{4}} \frac{(\sec^6 x - \tan^6 x)}{(a ^ {-x} + 1) \cos^2 x} dx \end{align} Add this to the original integral to get \begin{align} 2I &= \int_{\frac{-π}{4}}^{\frac{π}{4}} \frac{(\sec^6 x - \tan^6 x)}{ \cos^2 x}\left( \frac{1}{a ^ {x} + 1} + \frac{1}{a ^ {-x} + 1} \right) dx \\ &= \int_{\frac{-π}{4}}^{\frac{π}{4}} \frac{(\sec^6 x - \tan^6 x)}{ \cos^2 x} dx \end{align} Now, apply the change of variable $u = \tan(x)$ to above to reach \begin{align} 2I &= \int_{-1}^{1} \left((1+u^2)^3 - u^6\right) du \end{align} The rest is easy. The final answer is $I = 2.6$ .
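The final elementary integral can be confirmed with one SymPy call (added as a check):

from sympy import symbols, integrate, expand

u = symbols('u')
two_I = integrate(expand((1 + u**2)**3 - u**6), (u, -1, 1))  # integrand is 1 + 3u^2 + 3u^4
print(two_I, two_I / 2)   # 26/5 and 13/5, i.e. I = 2.6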
|calculus|
0
Find the integral $\int_{0}^{\infty} \frac{\log(x)\operatorname{arccot}(x)}{\sqrt{x}}\,dx$
Consider $\displaystyle \int_{0}^{\infty}\frac{\log(x)\cot^{-1}(x)}{\sqrt{x}}\,dx$ . I've tried: $\displaystyle F(a,b) = \int_{0}^{\infty} \frac{\log(ax)\cot^{-1}(bx)}{\sqrt{x}}\,dx$ , so $\displaystyle F''(a,b) = \int_{0}^{\infty}\frac{dx}{a\sqrt{x}(1+b^2x^2)} = \int_{0}^{\infty}\frac{dt}{2a(1+t^4b^2)} = \frac{\pi}{2a\sqrt{2b}}$ . So $F'_{a}(a,b) = \frac{\pi}{2a}\sqrt{b}+C(a)$ , and similarly for the derivative in $b$ . But how can we find the constant? It's easy to see that $C(a) = 0$ , but what about $C(b)$ ? If we consider $F'_{b}(a,b) = \frac{\pi\log(a)}{\sqrt{8b}} + C(b)$ , then it isn't easy to find it. Any ideas? Edit: I also thought about considering $\cot^{-1}(bx)$ , then making the substitution $t = \frac{1}{1+x}$ and representing $\log$ as a series.
More generally \begin{align} &\int_{0}^{\infty} \frac{\ln x\cot^{-1}ax}{\sqrt{x}}\,dx\\ = &\int_{0}^{\infty} \int_{0}^{\frac1a} \frac{\sqrt x\ln x}{y^2+x^2} \overset{y\to y^2}{dy}\overset{x\to x^2}{dx} =\int_{0}^{\frac1{\sqrt a}} \int_{0}^{\infty} \frac{8y x^2\ln x}{y^4+x^4 }dx\ dy\\ =& \ \frac\pi{\sqrt2} \int_{0}^{\frac1{\sqrt a}} (\pi +4\ln y)dy= \frac{\pi}{\sqrt {2a}}(\pi -4-2\ln a) \end{align}
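As a numerical cross-check of the closed form (added; $a=2$ is an arbitrary test value), using mpmath:

import mpmath as mp

mp.mp.dps = 25
a = mp.mpf(2)   # arbitrary test value for the parameter a

# arccot(t) = atan(1/t) for t > 0
f = lambda x: mp.log(x) * mp.atan(1 / (a * x)) / mp.sqrt(x)

numeric = mp.quad(f, [0, 1, mp.inf])                           # direct quadrature
closed = mp.pi / mp.sqrt(2 * a) * (mp.pi - 4 - 2 * mp.log(a))  # claimed closed form
print(numeric)
print(closed)

If the closed form is correct, the two printed values agree to working precision.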
|integration|sequences-and-series|
0
If $u^2 \ge -\dfrac{8}{3}$, then $u \ge -\sqrt{\dfrac{8}{3}}$.
If $u^2 \ge -\dfrac{8}{3}$ , then $u \ge -\sqrt{\dfrac{8}{3}}$ . Is this the correct convention? I was confused because initially I thought the negative sign would go inside the square root, but then that would lead to imaginary numbers. Thanks.
No, this is wrong. In general, for any real $u$ , $$u^2 \geq 0,$$ so your starting inequality is trivially true. What is true in general is that for any $a\geq 0$ , $$u^2\geq a \iff u \geq \sqrt{a}\ \lor\ u\le -\sqrt{a}\iff u \in (-\infty,-\sqrt{a}]\ \cup \ [\sqrt{a},+\infty) $$ and conversely $$u^2\leq a \iff -\sqrt{a}\leq u \leq \sqrt{a} \iff u \in [-\sqrt{a},\sqrt{a}].$$
|algebra-precalculus|
1
Prove $x \in \mathbb{R} $ if $x^3 + 2x + 1$ is an integer divisible by 3, then $x$ is an irrational number
proof: Suppose that $3| x^3+2x+1$ and $x$ is a rational number, $\frac{p}{q}$ , $gcd(p, q) = 1, q \ne 0$ sub $\frac{p}{q}$ into $x^3+2x+1$ : $\frac{p^3+2pq^2+q^3}{q^3} =3d$ for some integer d $p^3+2pq^2+q^3 =3dq^3$ this means that, $3|(p^3+2pq^2+q^3)$ I got stuck from this point onwards and could not find a contradiction, any hints on how should I proceed with the proof?
It should be $3\mid (p^3+2pq^2+q^3)$ . If $p\equiv 0\pmod 3 $ then we must have $3\mid q^3$ . As $3$ is prime this implies that $3\mid q$ , contradicting the fact that $\gcd(p,q)=1$ . If $p\equiv 1\pmod 3$ we must have $1+2q^2+q^3\equiv 0 \pmod 3$ . Both $q\equiv 0\pmod 3$ and $q\equiv 1\pmod 3$ would lead to $1\equiv 0\pmod 3$ , and $q\equiv 2\pmod 3$ would mean $2\equiv 0 \pmod 3$ , so this cannot be the case either. If $p\equiv 2\pmod 3$ , necessarily $3\mid 2+q^2+q^3$ . As before, all three possibilities for $q$ lead to contradictions. Thus $x$ cannot be expressed as a fraction and therefore must be irrational.
|discrete-mathematics|
0
cdf for first success
I had two ways of computing the CDF for a first success distribution. $ X \sim FS(p)$ $p_X(x) = P(X = x) = (1 - p)^{x-1}p = q^{x-1}p$ Method 1: $$F_X(x) = P(X \leq x)= \sum_{i = 1}^x q^{i-1}p$$ $$= p\sum_{i=1}^xq^{i-1}$$ $$= \frac{p}{q}\sum_{i=1}^{x} q^i$$ $$= \frac{p}{q}\frac{1 - q^{x}}{1-q}$$ $$= \frac{p}{q}\frac{1 - q^{x}}{p}$$ $$= \frac{1 - q^{x}}{q}$$ Method 2: Note that $P(X > x)$ is the probability of at least $x$ failures in a row. $$F_X(x) = P(X \leq x)= 1 - P(X > x)$$ $$= 1 - q^x$$ Why is one of these incorrect?
In method 1, $$ \sum_{i=1}^{x} q^{i-1} = \frac{1-q^x}{1-q}, $$ which is the reason why your computation is wrong. So, \begin{align*} F_X(x) &= \sum_{i=1}^{x} q^{i-1} p \\ &= p \sum_{i=1}^{x} q^{i-1} \\ &= p \frac{1-q^x}{1-q} \\ &= 1 - q^x. \end{align*}
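A few lines of Python (added as a check) confirm that the corrected sum matches $1-q^x$ :

p = 0.3
q = 1 - p

for x in range(1, 8):
    pmf_sum = sum(q**(i - 1) * p for i in range(1, x + 1))  # sum of the pmf
    print(x, pmf_sum, 1 - q**x)   # the two columns agree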
|probability|
0
Maximizer of a function varying smoothly depending on a parameter
Let $f:\mathbb{R}^2\to \mathbb{R}$ be a smooth function such that for all $t$ $f_t:=f(t,\cdot)$ has a unique maximum at $x^*(t)$ (e.g. $f_t$ is a strictly concave function for all $t$ ). My question is: is the function $t\mapsto x^*(t)$ smooth in general? If not, are there some reasonable conditions on $f$ that guarantee its smoothness?
Thinking about it a bit more, I realized that under the assumption of strict concavity the answer follows simply from the implicit function theorem, since $\frac{\partial f}{\partial x}(t,x^*(t))=0$ and $\frac{\partial^2 f}{\partial x^2}(t,x^*(t))<0$ .
|real-analysis|optimization|convex-optimization|
0
Discontinuous function has a zero.
Let $f,g: [a,b] \to \mathbb{R}$ where $g$ is a continuous function, $f+g$ is non-decreasing and $f(a)>0>f(b)$ . Prove that we can find a point $c$ such that $f(c)=0$ . Given that $f+g$ is non-decreasing, I know that it has at most a countable number of points where it is discontinuous, and all of them are jump discontinuities, so the left and right limits at each point exist. Given that the right and left limits of $f+g$ and of $g$ exist, we see that $f$ also has right and left limits at each point. However, I'm stuck on how to proceed with the proof. Any help will be greatly appreciated.
Here is another proof. Lemma: $f$ has only upward jump discontinuities, i.e. $\lim_{x \to c^-} f(x) \le f(c)$ for all $c \in (a,b]$ and $\lim_{x \to c^+} f(x) \ge f(c)$ for all $c \in [a, b)$ . (In particular, the one-sided limits exist in $(-\infty, \infty]$ .) Proof: We note that if $h$ is non-decreasing, then $\lim_{x \to c^{-}}h(x) = \sup_{x<c} h(x) \le h(c)$ . Thus, $$\lim_{x \to c^{-}} (f(x) +g(x)) \le f(c) + g(c)$$ But $\lim_{x \to c^{-}}g(x) = g(c)$ by the continuity of $g$ . Therefore, we deduce the result as claimed for $x \to c^-$ . Similarly for the other limit. Proposition: $f$ has a zero in $[a,b]$ . Proof: Let $\hat{f}$ be defined by $\hat{f}(x) = \inf_{y \le x} f(y)$ for all $x$ such that this infimum is not $-\infty$ . Then $\hat{f}$ is continuous since $f$ only has upward jump discontinuities. Further, $f(a) > 0$ implies $\hat{f}(a) >0$ , and $f(b)<0$ implies $\exists c : \hat{f}(c)<0$ . Therefore, $\hat{f}$ has a zero by the IVT. Let $z$ be the greatest zero of $\hat{f}$ ; this exists by continuity.
|real-analysis|continuity|
0
Question regarding least value of trigonometric function
The question I am asking about is as follows: Prove that the least positive value of $x$ satisfying $\tan(x) = x+1$ lies in the interval $(\frac{\pi}{4}, \frac{\pi}{2})$. The solution to this question in my textbook uses a graph; however, this method becomes infeasible for more complex equations. Is there another way to solve this, perhaps analytically, using derivatives or something? Everything I tried seemed to fail; could someone please help me with this one? Is there an analytical way to find the smallest intersection of two functions? Any help would be appreciated!
For $x\in[0,\frac{\pi}{4})$ we have $$\tan(x)<1\le x+1$$ and $\tan(\pi/4)=1\neq \pi/4+1$, so the first intersection must happen after $\pi/4$. Now we just need to prove there is an intersection in $(\pi/4,\pi/2)$. For this, define $g(x)=\tan(x)-x-1$. We have $g(\pi/4)=-\pi/4<0$. Moreover, as $$\lim_{x\to\pi/{2}^-} \tan(x)=+\infty $$ and $$\lim_{x\to\pi/{2}^-} (-x-1)=-\pi/2-1,$$ we have $$\lim_{x\to\pi/{2}^-} g(x)=+\infty.$$ Therefore, there exists some $y\in (\pi/4,\pi/2)$ such that $g(y)>0$. As $g$ is continuous and $g(\pi/4)<0<g(y)$, by the IVT there exists some $c\in(\pi/4,y)$ such that $g(c)=0$, that is, $\tan(c)=c+1$.
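A numerical cross-check via bisection (a sketch, not a substitute for the IVT argument):

```python
import math

g = lambda x: math.tan(x) - x - 1
lo, hi = math.pi / 4, math.pi / 2 - 1e-9   # g(lo) < 0, g(hi) > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)   # ≈ 1.132, indeed inside (pi/4, pi/2)
```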
|calculus|trigonometry|
1
Prove that if $x \in \mathbb{R}$ and $x^3 + 2x + 1$ is an integer divisible by 3, then $x$ is an irrational number
Proof: Suppose that $3\mid x^3+2x+1$ and $x$ is a rational number $\frac{p}{q}$, with $\gcd(p, q) = 1$, $q \ne 0$. Substituting $\frac{p}{q}$ into $x^3+2x+1$: $\frac{p^3+2pq^2+q^3}{q^3} =3d$ for some integer $d$, so $p^3+2pq^2+q^3 =3dq^3$. This means that $3\mid(p^3+2pq^2+q^3)$. I got stuck from this point onwards and could not find a contradiction; any hints on how I should proceed with the proof?
$x$ cannot be an integer, because we would have $x^3+2x+1=3a$ where $a$ is an integer, while $x^3+2x+1\equiv1\pmod3$ for every integer $x$. Suppose $x=\dfrac pq$ with $(p,q)=1$; then we have $$\frac{p^3+2pq^2}{q^3}=(3a-1)\in\Bbb Z.$$ This implies $$p^3\equiv0\pmod q\iff p^3=mq \text{ for some integer } m.$$ But then every prime factor of $q$ divides $p^3$, and hence divides $p$, which contradicts that $p$ and $q$ are coprime. Then $x$ must be irrational.
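A brute-force sanity check of the first claim, that no integer $x$ works (a minimal sketch):

```python
# x^3 + 2x + 1 ≡ 1 (mod 3) for every integer x, so it is never divisible by 3.
assert all((x**3 + 2*x + 1) % 3 == 1 for x in range(-1000, 1000))
```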
|discrete-mathematics|
0
How should these types of integrals be solved? $\int\frac{1}{y}\frac{1}{\ln y}$
$$\int\frac{1}{y}\frac{1}{\ln y}dy$$ $$\int\frac{1}{y}\frac{1}{\ln^2y}dy$$ I know that I should use some substitution, but I don't understand how. Is there any other way than substitution? I've tried to understand the integral-calculator.com output, but no luck. Can anyone explain to me how to solve these integrals?
We have that $(\ln(y))'=\dfrac{1}{y}$. It is well known that for every differentiable function $f$ and $n\neq -1$, $$\displaystyle\int f'(x)f(x)^n\,dx=\dfrac{f(x)^{n+1}}{n+1}+C.$$ Therefore, if $k\in\mathbb{Z}$ and $k\neq 1$, we will have $$\displaystyle\int \dfrac{dy}{y\ln(y)^k}=\dfrac{\ln(y)^{1-k}}{1-k}+C.$$ If $k=1$, $$\dfrac{1}{y\ln(y)}=\dfrac{1/y}{\ln(y)}.$$ As the numerator is the derivative of the denominator, primitives will be of the form $\ln(|\ln(y)|)+C$.
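A numerical-differentiation check of both antiderivatives (a sketch; the step size $h$ is an arbitrary choice):

```python
import math

y, h = 2.5, 1e-6
F1 = lambda y: math.log(abs(math.log(y)))     # k = 1 antiderivative
F2 = lambda y: -1 / math.log(y)               # k = 2 antiderivative
d1 = (F1(y + h) - F1(y - h)) / (2 * h)        # central differences
d2 = (F2(y + h) - F2(y - h)) / (2 * h)
assert abs(d1 - 1 / (y * math.log(y))) < 1e-6
assert abs(d2 - 1 / (y * math.log(y)**2)) < 1e-6
```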
|integration|substitution|
0
$2^x-x^2+x+\cos(x)=0$ has one real root
We just need to show that $2^x-x^2+x+\cos(x)=0$ has exactly one real root. How I approached it: Let $f(x)=2^x-x^2+x+\cos(x)$. $f$ is differentiable with $f'(x)=\log2 \cdot 2^x -2x +1 - \sin(x)$. We can see that for $x \in (-\infty,0)$, $f'(x) \gt 0$, so $f$ is increasing there. Now we can find the range of $f$ for $x<0$, and it turns out to be $f((-\infty ,0))=(-\infty , 2)$. So $f$ has exactly one root for $x<0$. But this is how far I've come so far. For positive $x$ I cannot seem to figure out how to work it out. Any help would be much appreciated!
I didn't manage to solve this problem, but here is how I reasoned for the case $x\geq0$. (Also, I'm not really sure how to formalize the last part.) I found that in $(0,1)$ the function is decreasing because $$x\in(0,1)\Rightarrow f'(x)\approx -2^x\left(\frac{\log^2(2)}{3}\right)-2x+1,$$ and when we ask this to be greater than $0$ we get $$2^x\left(\frac{\log^2(2)}{3}\right)+2x\leq-1,$$ which is never satisfied. For $x>1$ I realized that there must be an absolute minimum, considering that the function is nonnegative and decreasing on $[0,1]$ but $\lim_{x\to\infty}f(x)=+\infty$. The value of $f'(3)$ is positive, so the minimum is in $[1,3]$. We might find a contradiction through Rolle's theorem. For example, let's assume that $x_0$ is a positive root of $f$ and is also the minimum; if there were another positive root $x_1$, by Rolle's theorem we should have a point $x_2\in[x_0,x_1]$ (assuming WLOG $x_1\geq x_0$) such that $f'(x_2)=0$, but we know that $f$ is nonnegative in $[0,+\infty)$, so $x_2$ is a local max
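Whatever the gaps in the sketch above, a numerical scan supports the claim of exactly one real root (a sketch):

```python
import math

f = lambda x: 2**x - x*x + x + math.cos(x)
xs = [k / 100 for k in range(-2000, 2001)]
crossings = [a for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]
print(crossings)   # one sign change, near x ≈ -0.76
```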
|calculus|functions|
0
How does the formal definition of limit deal with functions that are not defined at multiple points, maybe even entire intervals?
So the formal definition: $(\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in \mathbb{R})(0 < |x-x_0| < \delta \implies |f(x)-L| < \epsilon)$. If this is true then we say $\lim_{x \to x_0} f(x) = L$. In my opinion, this really only makes sense if the function is defined for all values of $\mathbb{R} - \{x_0\}$. Say the function is not defined at a point different from $x_0$, call it $x_1$; then the expression $|f(x_1) - L| < \epsilon$ doesn't make sense and we can't really assign a truth value to it, and so we can't really know if a specific $\delta$ that includes $x_1$ is good or not. I thought about it and came up with two possible solutions, but I have my doubts about both. First, we could modify the definition of limit so that it only works for values where the function is defined; for instance, if $A \subseteq \mathbb{R}$ and the function is defined as $f:A \to\mathbb{R}$, we could tweak the formal definition so that it says: $(\forall \epsilon > 0)(\exists \delta > 0)(\forall x \in A)(0 < |x-x_0| < \delta \implies |f(x)-L| < \epsilon)$. This solution has a problem and the pr
For us to talk about the limit of $f$ at $x_0$, we require $x_0$ to be an accumulation point of the domain of $f$, that is, that for every $c>0$ there is some $x\in \operatorname{Dom}(f)$, with $x\neq x_0$, in $(x_0-c,x_0+c)$. This is equivalent to saying that there exists a sequence in the domain of $f$ that converges to $x_0$ but is never actually equal to $x_0$. Otherwise, $x_0$ is called an isolated point (of the domain) and it does not make sense to talk about the limit of $f$ at $x_0$.
|calculus|limits|epsilon-delta|
0
Is it true that if $h \in L^{1}(\mathbb{R})$ such that $\hat{h} \in L^{2}(\mathbb{R})$ then in fact $h \in L^{2}(\mathbb{R})$
Is it true in general that if $h \in L^{1}(\mathbb{R})$ is such that $\hat{h} \in L^{2}(\mathbb{R})$, where $\hat{h}$ is the Fourier transform of $h$, then in fact $h \in L^{2}(\mathbb{R})$? If so, how can one go about proving it? Otherwise, is there a counter-example? Thanks.
Yes, it is, due to the Plancherel theorem. The Fourier transform is defined for functions in $L^1$ initially; however, the Plancherel theorem states that $\|h\|_2 = \|\hat{h}\|_2$ (or, more generally, that $\langle f,g \rangle = \langle \hat{f},\hat{g} \rangle$). Since $\hat{h} \in L^2$, one thus has $\|h\|_2 = \|\hat{h}\|_2 < \infty$, hence $h \in L^2$. This theorem is actually responsible for the Fourier transform $\mathscr{F} : L^2(\Bbb{R}) \to L^2(\Bbb{R})$ being an isometry of $L^2$ (i.e. a unitary automorphism of $L^2$).
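As a loose numerical analogue, the discrete Fourier transform satisfies the same identity; in numpy's unnormalized convention the transform side carries a factor $1/N$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(1024)
H = np.fft.fft(h)
# Discrete Parseval: sum |h|^2 == (1/N) * sum |H|^2
assert np.isclose(np.sum(np.abs(h)**2), np.sum(np.abs(H)**2) / len(h))
```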
|fourier-analysis|fourier-transform|
0
Solve the system of equations $x+y+z=1, x y z=1,|x|=|y|=|z|=1, x, y, z \in \mathbb{C}$
Solve the system of equations $x+y+z=1, x y z=1,|x|=|y|=|z|=1, x, y, z \in \mathbb{C}$ I tried with polar representation, letting $x=\exp{(ia)}$ , $y=\exp{(ib)}$ , $z=\exp{(ic)}$ , with this I got $a+b+c=2 \pi n$ , so the three numbers are essentially three points on a unit circle, with their centroid being the point $1/3$ . But I don't know how to proceed.
Use $xy= 1/z = \overline{z}$ and similar to conclude that $xy+xz+yz = \overline{x+y+z} = 1$ . Then $x, y, z$ are the solutions of the cubic equation $t^3 - (x+y+z)t^2 + (xy+xz+yz)t - xyz=0$ , hence of $t^3-t^2+t-1=0$ . That equation can be written as $(t-1)(t^2+1)=0$ , hence the solutions are $i$ , $-i$ , and $1$ .
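A quick verification that $\{1, i, -i\}$ satisfies all three conditions (a sketch):

```python
x, y, z = 1, 1j, -1j
assert abs(x + y + z - 1) < 1e-12                        # x + y + z = 1
assert abs(x * y * z - 1) < 1e-12                        # xyz = 1
assert all(abs(abs(w) - 1) < 1e-12 for w in (x, y, z))   # unit moduli
```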
|complex-numbers|
1
Intuitive understanding of multiplying a matrix by a vector and its Hermitian transpose
I keep seeing $$\overline{a}^H \text{ } \overline{R}_{xx} \text{ } \overline{a} = \text{a single value}$$ where $\overline{a}$ is an $m\times 1$ vector and $R_{xx}$ is an $m\times m$ autocorrelation matrix of another vector $\overline{x}$ (not the same as $\overline{a}$). But I am struggling to see what this product of three factors may intuitively be doing, and what its purpose is. Why multiply a matrix by a vector and also by the Hermitian transpose of that same vector? I am studying phased array antennas, direction of arrival, and beamweight methods. Many thanks,
In the context of array signal processing, the quantity $a^{H}R_{xx}a$ denotes a power as a function of the vector $a$. E.g., $a$ denotes a steering vector (with some direction-dependent underlying parameters) or, more generally, a filter vector. To see why this quadratic form refers to a power, let's first check the definition of the correlation matrix / covariance matrix. Assuming zero-mean signals $x$, the matrix $R_{xx}$ is defined as \begin{equation} R_{xx} = \mathcal{E}\left[xx^{H}\right] \end{equation} with $\mathcal{E}\left[\cdot\right]$ denoting the expectation operator (assuming that $x$ is a random variable) and $(\cdot)^H$ denoting the conjugate transpose. Hence, the quadratic form reads \begin{equation} a^{H}R_{xx}a = a^{H}\mathcal{E}\left[xx^{H}\right]a = \mathcal{E}\left[a^{H}xx^{H}a\right] = \mathcal{E}\left[|a^{H}x|^{2}\right] \geq 0. \end{equation}
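A small numpy demonstration, with a sample covariance standing in for the expectation (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 100_000
x = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
R = x @ x.conj().T / n                         # sample estimate of E[x x^H]
a = rng.standard_normal(m) + 1j * rng.standard_normal(m)
power = (a.conj() @ R @ a).real                # a^H R a
mean_sq = np.mean(np.abs(a.conj() @ x)**2)     # sample E[|a^H x|^2]
assert power >= 0 and np.isclose(power, mean_sq)
```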
|linear-algebra|covariance|symmetric-matrices|correlation|
0
What's the product rule for the exponential differential operator?
So I was thinking: say you have a linear differential operator such as the exponential differential operator, which is renowned in some fields of physics: $$e^{\mathrm D_x}\equiv\sum_{n=0}^\infty\frac{\mathrm D_x^n}{n!},\text{ where } \mathrm D_x^n\equiv\frac{\mathrm d^n}{\mathrm dx^n},$$ and you applied it to a product of functions $u(x)\cdot v(x)$. Then what would the "product rule" be for this operator? I.e., $$e^{\mathrm D_x}[u\cdot v]=?$$ APPROACH: Is this it? $$\sum_{n=0}^\infty\frac{1}{n!}\frac{\mathrm d^n}{\mathrm dx^n}(u\cdot v)=\sum_{n=0}^\infty\frac{1}{n!}\sum_{i=0}^n{n\choose i}u^{(n-i)}v^{(i)}=\sum_{n=0}^\infty\sum_{i=0}^n\frac{u^{(n-i)}}{(n-i)!}\frac{v^{(i)}}{i!}\overset{j=n-i}{=}\sum_{i=0}^\infty\sum_{j=0}^\infty\frac{u^{(j)}}{j!}\frac{v^{(i)}}{i!}$$
Your approach is correct. You can continue by introducing an auxiliary variable $t$ in order to make the respective Taylor series of $u$ and $v$ appear, as follows: $$ \begin{align} e^{D_x}(uv)(x) &= \sum_{i,j\ge0} \frac{u^{(i)}(x)}{i!}\frac{v^{(j)}(x)}{j!} \\ &= \left[\sum_{i=0}^\infty \frac{u^{(i)}(x)}{i!}t^i \sum_{j=0}^\infty \frac{v^{(j)}(x)}{j!}t^j\right]_{t=1} \\ &= \left[u(x+t)v(x+t)\right]_{t=1} \\ &= u(x+1)v(x+1) \end{align} $$ This result is not surprising, since the operator $e^{D_x}$ is nothing other than the translation operator.
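A symbolic sanity check of the shift identity for particular entire functions, using sympy; truncating the series at 25 terms is an arbitrary but ample choice here (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
u, v = sp.sin(x), sp.exp(x)
series = sum(sp.diff(u * v, x, n) / sp.factorial(n) for n in range(25))
lhs = float(series.subs(x, 0.3))        # truncated e^{D_x}(uv) at x = 0.3
rhs = float((u * v).subs(x, 0.3 + 1))   # u(x+1) * v(x+1)
assert abs(lhs - rhs) < 1e-10
```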
|calculus|derivatives|taylor-expansion|differential-operators|
1
Same number of poles and zeros of a rational complex map
Let $R(z) = \frac{P(z)}{Q(z)}$ with $P,Q$ polynomials in $\mathbb{C}[z]$, such that $P^{-1}(0) \cap Q^{-1}(0) = \emptyset$; then $R: \overline{\mathbb{C}}\to \overline{\mathbb{C}}$, defining $R(z) = \infty$ if $z \in Q^{-1}(0)$. There is a claim that, counting zeros and poles of $R$ with multiplicity, we have $$\# \text{zeros of }R = \# \text{poles of }R = \max\{\deg(P),\deg(Q)\},$$ and I have trouble with the case $\deg(P)>\deg(Q)$: as $R(\infty) \neq 0$, then $R(z) = 0 \iff P(z) = 0$ and the number of zeros is $\deg(P)$, but the number of poles is $\deg(Q) \neq \deg(P)$. The case $\deg(P)<\deg(Q)$ is okay, because $\infty$ will be a zero of multiplicity $\deg(Q)-\deg(P)$; adding the zeros of $P$, which number $\deg(P)$, we get $\deg(Q)$ zeros. I think the problem may be in the definition of the poles of a rational function, which I think is the number of zeros of the denominator.
Thanks for the help in the comments. The problem was that in the case $\deg(P)>\deg(Q)$, $\infty$ is not a zero but a pole, since $R(\infty) = \infty$, with multiplicity equal to the order of the pole of $R(1/z)$ at $0$; in that case, multiplying by $\frac{z^{\deg(P)}}{z^{\deg(P)}}$, we end up in the case that did work.
|complex-analysis|polynomials|
0
Calculate the improper integral $\int_{B(\mathbf{0}, 1)} \frac{\mathrm{d} x \mathrm{~d} y \mathrm{~d} z}{1-a x-b y-c z}$
Problem: Assume $a^2 + b^2 + c^2 = 1$. Calculate the improper integral $\int_{B(\mathbf{0}, 1)} \frac{\mathrm{d} x \mathrm{~d} y \mathrm{~d} z}{1-a x-b y-c z}$ where $B(\mathbf{0}, 1)=\left\{x^2+y^2+z^2 \leq 1\right\}$ is the unit ball in $\mathbb{R}^3$. Attempt: Assuming $a,b,c \neq 0$ (I had difficulty with the simpler cases as well, where for example $a=1, b=c=0$), I performed the change of variables $u = 1-ax-by-cz$, $v = by$, $w = cz \iff x = \frac{1-u-v-w}{a}$, $y =v/b$, $z = w/c$. The absolute value of the Jacobian will be $\frac{1}{abc}$, and I get that the integrand will be $\frac{1}{abc} \cdot \frac{1}{u}$. The problem is, I'm having difficulty determining the new set under integration according to the diffeomorphism (induced by the change of variables), hence I can't proceed to calculate the integral. I know the new set of integration will have $(\frac{1-u-v-w}{a})^2 + (v/b)^2 + (w/c)^2 \leq 1$, but I don't know how to continue and, hopefully, to use Fubini's the
As the domain as well as the measure of the integral $$I(\vec{n})=\!\!\!\int\limits_{B(0, 1)}\! \! d^3x \; \frac{1}{1- \vec{n}\cdot \vec{x}}, \qquad |\vec{n}|=1,$$ are invariant under rotations, we have $$I(R \,\vec{n})=I(\vec{n}) \quad \forall \,R \in {\rm SO(3)},$$ i.e. the result is independent of the direction of the unit vector $\vec{n}=(a,b,c)^T$ . Using spherical coordinates $$\begin{align} x_1=r \, \sin \theta \, \cos \varphi, \quad x_2=r \,\sin \theta \,\sin \varphi, \quad x_3= r \, \cos \theta \end{align}$$ and choosing $n_1=n_2=0, \; n_3=1$ , we obtain (with $u= \cos \theta$ ) $$\begin{align} I(\vec{n})&= \int\limits_0^1 \!dr \,r^2 \int\limits_{-1}^1 \! du \int\limits_0^{2\pi}\! d\varphi \;\frac{1}{1-r \,u}\\[5pt] &=2\pi\int\limits_0^1 \! dr \, r^2 \int\limits_{-1}^1 \! \frac{du}{1-r \, u}\\[5pt] &=2\pi \int\limits_0^1\! dr \, r \, \log\frac{1+r}{1-r} \\[5pt]&=2\pi. \end{align}$$
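A Monte Carlo cross-check of the value $2\pi\approx 6.283$ (a sketch; the integrand is singular on the boundary but integrable, so the estimate is noisy yet serviceable, and the unit vector chosen is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (2_000_000, 3))
inside = np.sum(pts**2, axis=1) <= 1.0
n_vec = np.array([3.0, 4.0, 12.0]) / 13.0      # some unit vector (a, b, c)
vals = 1.0 / (1.0 - pts[inside] @ n_vec)
print(vals.mean() * 4 * np.pi / 3, 2 * np.pi)  # mean * vol(ball) ≈ 6.28
```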
|real-analysis|integration|improper-integrals|multiple-integral|
1
Question regarding range of addition of trigonometric functions
I am required to find the range of the following trigonometric function: $$2 \sin^2(x) + 2 \sin(x).$$ My approach to solving the above question has been as follows. The range of $2 \sin^2(x)$ must be $[0,2]$ and the range of $2 \sin(x)$ is $[-2, 2]$, so therefore the range of the function $2 \sin^2(x) + 2 \sin(x)$ should be $[-2, 4]$. However, the graph of the function $2 \sin^2(x) + 2 \sin(x)$ (plotted in Desmos) runs from $-0.5$ to $4$, which is what I do not understand. Am I missing something? Any help would be appreciated!
You cannot just add together the ranges of the two functions. What is true is that if you take the derivative of $f(x)=2\sin^2(x)+2\sin(x)$ and set it to $0$ you obtain $$f'(x)=4\sin(x)\cos(x) + 2\cos(x)=2\cos(x)\left(2\sin(x)+1\right)=0,$$ so either $\cos(x)=0$ or $\sin(x)=-\frac12$. $$2\sin(x)=-1 \Rightarrow x=\frac{-\pi}{6}$$ (up to periodicity), and plugging this value into $f$ you get $f(\frac{-\pi}{6})=\frac{-1}{2}$, the minimum. The maximum is found evaluating at $x=\frac{\pi}{2}$ (where $\cos(x)=0$) and is equal to $4$, so the range is $[-\frac12,4]$. Clearly this function is $2\pi$-periodic, so all the values for $x$ are taken $\pmod{2\pi}$.
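A grid check of the resulting range $[-\frac12, 4]$ (a sketch):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1_000_001)
f = 2 * np.sin(x)**2 + 2 * np.sin(x)
print(f.min(), f.max())   # ≈ -0.5 and 4.0
```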
|trigonometry|
0
If $u^2 \ge -\dfrac{8}{3}$, then $u \ge -\sqrt{\dfrac{8}{3}}$.
If $u^2 \ge -\dfrac{8}{3}$ , then $u \ge -\sqrt{\dfrac{8}{3}}$ . Is this the correct convention? I was confused because initially I thought the negative sign would go inside the square root, but then that would lead to imaginary numbers. Thanks.
Can you explain to me how you know $~\displaystyle u^2 \geq a \iff -\sqrt{a} \geq -u ~?$ To attack this entire subject analytically, you need a previous result from the field theory of the real numbers. Suppose that $a < b$. Then: For any $c > 0$, $(c \times a) < (c \times b)$. For any $c < 0$, $(c \times a) > (c \times b)$. The above result is actually based on some prior ideas from the field theory of real numbers: $(a < b) \iff (b - a) > 0$. If $r > 0$ and $s > 0$, then $(r \times s) > 0$. If $r < 0$ and $s < 0$, then $(r \times s) > 0$. If exactly one of $r,s$ is positive, and the other one is negative, then $(r \times s) < 0$. Now, suppose that $a > 0$, and you are trying to solve the inequality $$x^2 \geq a. \tag1 $$ The simplest approach will be case work. In order to identify all values of $x$ that satisfy the inequality in (1) above, you can split your analysis into the following two cases: Case 1: $x \geq 0$. Case 2: $x < 0$. Note that by convention, for any $a > 0$, you will always have that $\sqrt{a} > 0.$ $\underline{\text{Case 1:}} ~
|algebra-precalculus|
0
Solve the equation $\left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}}$
Solve in $\mathbb{R}$ : $ \left(\frac{1+\sqrt{1-x^2}}{2}\right)^{\sqrt{1-x}} = (\sqrt{1-x})^{\sqrt{1-x}+\sqrt{1+x}} $ My approach: Let $a = \sqrt{1-x}$ and $b = \sqrt{1+x}$ so $a^2 + b^2 = 2$ . The equation becomes $\left(\frac{1+ab}{2}\right)^a = a^{a+b}$ , which is equivalent to $\left(\frac{1+ab}{a^2+b^2}\right)^a = a^{a+b}$ . After taking the natural logarithm, we get $a \ln(1+ab) - a \ln(a^2+b^2) = a \ln(a) + b \ln(a)$ . I thought of considering a function but I couldn't find it. Any help is appreciated.
I tried to follow your ideas, but I first note that $$1+ab = \frac{(a+b)^2}{2},$$ since $a^2+b^2=2$. So we have $$(2a)\ln\left(\frac{a+b}{2}\right) = (a+b)\ln(a),$$ which is equivalent to solving $\ln\left(\frac{a+b}{2}\right)=\left(\frac{a+b}{2a}\right)\ln(a)$. This is basically of the form $\ln(x)=\frac{x}{y}\ln(y)$, which I don't know how to solve in general, but Wolfram finds the solution $a=b=1$. Hence $$\sqrt{1-x}=1, \qquad \sqrt{1+x}=1$$ has only $x=0$, which matches the idea in @DMcMor's comment. Sorry, this is incomplete as long as we can't solve $\ln(x)=\frac{x}{y}\ln(y)$; I read something about the Lambert W function.
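A numerical scan of the original equation over its domain $(-1,1)$ supports $x=0$ being the only solution (a sketch):

```python
import numpy as np

x = np.linspace(-0.999, 0.999, 200_000)   # even count, so x = 0 is not a grid point
a, b = np.sqrt(1 - x), np.sqrt(1 + x)
diff = a * np.log((1 + a * b) / 2) - (a + b) * np.log(a)
crossings = x[:-1][diff[:-1] * diff[1:] < 0]
print(crossings)   # a single sign change, at x ≈ 0
```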
|functions|inequality|logarithms|systems-of-equations|exponential-function|
0
Integrals with residue theory [ANSWERED]
I'm having some problems solving this integral: $$ I = \mathcal{P} \int_{-\infty}^{+\infty} \frac{1-e^{2ix}}{x^2} \ dx$$ where $\mathcal{P}$ is the Cauchy principal value. The exercise suggests using the fact that $$I_* = \frac{1}{2} \operatorname{Re} \left[I\right]=\mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx,$$ since $\sin^2 x = \frac{1}{2} \left(1- \cos(2x)\right)$. My solution. I went on and tried to solve $I_*$ as follows: I used the fact that the analytic extension of the integrand has no poles, which makes the integral equal to $0$ by residue theory: $$\lim_{R\to + \infty}\oint_{\Gamma_R} \frac{\sin^2z}{z^2} \ dz = \mathcal{P} \int_{-\infty}^{+\infty} \frac{\sin^2 x}{x^2} \ dx = 0,$$ where the second equality is true since $$\oint_{\Gamma_R}\frac{\sin^2z}{z^2} \ dz =\left(\int_{-R}^{+R} + \int_{C_R}\right) \frac{\sin^2z}{z^2} \ dz,$$ where $C_R = \{z = R e^{i \theta}\in \mathbb{C} : 0\le \theta \le \pi\}$ is the upper semicircular arc and $$\left\lvert \int_{C_R} \frac{\sin^2 z}{z^2} \ dz \
I don't know the theoretical context of your exercise, but this is some heavy machinery when you could simply argue that $$ \mathcal{P}\int_\Bbb{R} \frac{1-e^{2ix}}{x^2} \mathrm{d}x = i\pi\, \mathrm{Res}_{z=0}\left(\frac{1-e^{2iz}}{z^2}\right) = 2\pi $$ (the factor $i\pi$, rather than $2\pi i$, comes from indenting the contour around the simple pole at the origin), where the residue of the integrand can be extracted directly from its Laurent series, namely $$ \frac{1-e^{2iz}}{z^2} = -\frac{2i}{z} + 2 + \frac{4}{3}iz + \mathcal{O}(z^2) $$
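Numerically, the real part alone pins the value down, since $1-\cos 2x = 2\sin^2 x$; a crude quadrature check (a sketch):

```python
import numpy as np

x = np.linspace(-2000, 2000, 4_000_001)
f = np.full_like(x, 2.0)                       # limit of integrand at x = 0
nz = x != 0
f[nz] = (1 - np.cos(2 * x[nz])) / x[nz]**2
print(f.sum() * (x[1] - x[0]), 2 * np.pi)      # ≈ 6.283
```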
|integration|complex-analysis|complex-numbers|residue-calculus|cauchy-principal-value|
0
For $(a,b)=1$ and $n\in\mathbb{N}$ s.t. $a^{2^n} + b^{2^n} \equiv 0 \mod p$ prove that $2^{n+1} \mathrel{|} p - 1$
Let $a$ and $b$ be coprime integers and $p$ a prime number, such that for some $n \in \mathbb{N}$ : $$ a^{2^n} + b^{2^n} \equiv 0 \mod p $$ Prove that $2^{n+1}\mid p - 1$ . My attempt: Consider the group $G = (\mathbb{Z}/p\mathbb{Z})^\times$ , $G$ is cyclic and $|G| = p-1$ . $a^{2^n} \equiv -b^{2^n} \mod p \iff a^{2^{n+1}} \equiv b^{2^{n+1}} \mod p$ . Since $G$ is cyclic it means $G = \langle g\rangle$ and $a = g^k$ and $b = g^m$ for some $k$ and $m$ . Then $(g^k)^{2^{n + 1}} \equiv (g^m)^{2^{n+1}} \mod p \iff (g^{k-m})^{2^{n+1}} \equiv 1 \mod p \implies p-1 = |g| \mid (k-m)2^{n+1}$ ... I can't really go anywhere from here and I'm not even sure this is the correct way of approaching the problem. I'm thinking Lagrange's theorem can be used somewhere in the proof but I'd have to construct a subgroup of order $2^{n+1}$ which I'm not sure how to do. Some help would be greatly appreciated.
Steps, in outline: Show $b$ cannot be divisible by $p$. Solve $bx\equiv 1\pmod p$. Show $(ax)^{2^n}\equiv -1\pmod p$. Show this means $2^{n+1}$ must be the multiplicative order of $ax$. Conclude $2^{n+1}\mid p-1$. You don't really need $(a,b)=1$: this is true if one merely assumes $p\nmid b$ or, equivalently, $p\nmid a$.
|group-theory|elementary-number-theory|
0
For $(a,b)=1$ and $n\in\mathbb{N}$ s.t. $a^{2^n} + b^{2^n} \equiv 0 \mod p$ prove that $2^{n+1} \mathrel{|} p - 1$
Let $a$ and $b$ be coprime integers and $p$ a prime number, such that for some $n \in \mathbb{N}$ : $$ a^{2^n} + b^{2^n} \equiv 0 \mod p $$ Prove that $2^{n+1}\mid p - 1$ . My attempt: Consider the group $G = (\mathbb{Z}/p\mathbb{Z})^\times$ , $G$ is cyclic and $|G| = p-1$ . $a^{2^n} \equiv -b^{2^n} \mod p \iff a^{2^{n+1}} \equiv b^{2^{n+1}} \mod p$ . Since $G$ is cyclic it means $G = \langle g\rangle$ and $a = g^k$ and $b = g^m$ for some $k$ and $m$ . Then $(g^k)^{2^{n + 1}} \equiv (g^m)^{2^{n+1}} \mod p \iff (g^{k-m})^{2^{n+1}} \equiv 1 \mod p \implies p-1 = |g| \mid (k-m)2^{n+1}$ ... I can't really go anywhere from here and I'm not even sure this is the correct way of approaching the problem. I'm thinking Lagrange's theorem can be used somewhere in the proof but I'd have to construct a subgroup of order $2^{n+1}$ which I'm not sure how to do. Some help would be greatly appreciated.
Remark that both $a^{2^{n}}$ and $b^{2^{n}}$ are invertible in $\mathbb{Z}/p\mathbb{Z}$ (otherwise $p$ would be a common factor of $a$ and $b$), so the first equality can be written as $$ (ab^{-1})^{2^{n}}\equiv-1\pmod p. $$ Hence the order of $ab^{-1}$ divides $2^{n+1}$ but not $2^{n}$, so $$ \vert ab^{-1} \vert ={2^{n+1}}, $$ and therefore $2^{n+1} \mid p-1$ (the order of an element divides the order of the group).
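A brute-force check of the statement over small primes and exponents (a sketch):

```python
# Whenever p | a^(2^n) + b^(2^n) with p dividing neither a nor b,
# verify that 2^(n+1) divides p - 1.
for p in [5, 13, 17, 29, 97, 101]:
    for n in [1, 2, 3]:
        for a in range(1, p):
            for b in range(1, p):
                if (pow(a, 2**n, p) + pow(b, 2**n, p)) % p == 0:
                    assert (p - 1) % 2**(n + 1) == 0
```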
|group-theory|elementary-number-theory|
0
A concrete example with Arrow-Pratt coefficient of absolute risk aversion
Let $u_1$ and $g$ be increasing strictly concave functions from $\mathbb{R}$ to $\mathbb{R}$ . Let $u_2:=g\circ u_1$ . If we regard $u_1$ and $u_2$ as utility functions of two players, this is saying that player $2$ has Arrow-Pratt coefficient of absolute risk aversion greater than that of player $1$ . Suppose now $X$ is a risky asset, i.e. a (non-constant) random variable. Let $t_1^*=argmax_{t\in [0,1]} \mathbb{E}(u_1(1-t+tX))$ and $t_2^*=argmax_{t\in [0,1]} \mathbb{E}(u_2(1-t+tX))$ . How do I prove that $t_1^*>t_2^*$ ? Intuitively I expect this to be true since it means that the more risk averse player allocates less money to the risky asset.
Maybe it's easier than I thought. We know that $$\mathbb{E}[u'(1-t^*+t^*X)(X-1)]=0$$ and want to prove that for $g$ concave $$\mathbb{E}[g'(u(1-t^*+t^*X))u'(1-t^*+t^*X)(X-1)]<0.$$ This follows simply from the fact that if $X(\omega)>1$ then, by concavity of $g$ and monotonicity of $u$, $$g'(u(1-t^*+t^*X(\omega)))<g'(u(1)),$$ so that $$\mathbb{E}[g'(u(1-t^*+t^*X))u'(1-t^*+t^*X)(X-1)\mathbb{1}_{(X>1)}]<g'(u(1))\,\mathbb{E}[u'(1-t^*+t^*X)(X-1)\mathbb{1}_{(X>1)}].$$ Similarly, $$\mathbb{E}[g'(u(1-t^*+t^*X))u'(1-t^*+t^*X)(X-1)\mathbb{1}_{(X<1)}]<g'(u(1))\,\mathbb{E}[u'(1-t^*+t^*X)(X-1)\mathbb{1}_{(X<1)}].$$ Adding up we get the final inequality.
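A grid illustration with hypothetical choices $u_1(w)=\log w$ and $g(v)=-e^{-2v}$ (both increasing and strictly concave) and a two-point asset (a sketch):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_001)
w_up, w_dn = 1 - t + 2.0 * t, 1 - t + 0.5 * t   # X = 2 or 0.5, equally likely
u1 = lambda w: np.log(w)
g = lambda v: -np.exp(-2 * v)
E1 = 0.5 * u1(w_up) + 0.5 * u1(w_dn)            # E[u1(wealth)]
E2 = 0.5 * g(u1(w_up)) + 0.5 * g(u1(w_dn))      # E[u2(wealth)], u2 = g ∘ u1
t1, t2 = t[np.argmax(E1)], t[np.argmax(E2)]
print(t1, t2)   # t1 = 0.5 > t2 ≈ 0.16: more risk aversion, smaller stake
```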
|real-analysis|convex-optimization|finance|economics|utility|
0
The restriction of Lebesgue measure to the $\sigma$-algebra of Borel subsets of $\mathbb{R}$ is not complete.
I am new to measure theory. I am confused by the following claim from Measure Theory by Donald Cohn (Section 1.5 Completeness and Regularity): Claim $\quad$ The restriction of Lebesgue measure to the $\sigma$ -algebra of Borel subsets of $\mathbb{R}$ is not complete. In this post, I will denote the Lebesgue outer measure by $\lambda^*$ . According to the textbook, Definition $\quad$ The restriction of Lebesgue outer measure on $\mathbb{R}$ (or on $\mathbb{R}^d$ ) to $\mathcal{B}(\mathbb{R})$ or to $\mathcal{B}(\mathbb{R}^d)$ is called Lebesgue measure and will be denoted by $\lambda$ . The restriction of Lebesgue outer measure on $\mathbb{R}$ (or on $\mathbb{R}^d$ ) to the collection of Lebesgue measurable subsets of $\mathbb{R}$ (or of $\mathbb{R}^d$ ) is also called Lebesgue measure and will be denoted by $\lambda$ as well. Definition $\quad$ Let $(X,\mathcal{A},\mu)$ be a measure space. The measure $\mu$ (or the measure space $(X,\mathcal{A},\mu)$ ) is complete if the relations $A\i
The answer in question constructs a set, namely $$ B := V \times \{0\} $$ where $V$ is a (one-dimensional) Vitali set in $[0,1]$, which is not Borel measurable ($B\notin\mathcal{B}(\mathbb{R}^2)$). It is clear that $B\subseteq [0,1]\times \{0\}=:A$, and since $A$ is clearly a zero set with regard to the 2-dimensional Lebesgue measure, $B$ is also Lebesgue measurable as a subset of a Lebesgue measurable zero set. So we have $A\in \mathcal{B}(\mathbb{R}^2)$, $\lambda(A)=0$ and $B\subseteq A$, but $B\notin \mathcal{B}(\mathbb{R}^2)$. To show that $B$ is not Borel measurable ($B\notin \mathcal{B}(\mathbb{R}^2)$), the answer makes use of the map $$ f: \begin{cases} \mathbb{R}\to \mathbb{R}^2\\ x \mapsto (x,0) \end{cases} $$ which is continuous and therefore Borel measurable. Then $V=f^{-1}(B)$, so $V$ would be measurable if $B$ were measurable, which is a contradiction; so $B$ cannot be Borel measurable. In essence it shows the following statement in two dimensions, not in one! Let $(\mat
|real-analysis|analysis|measure-theory|proof-explanation|lebesgue-measure|
1
Explanation of the proof of Theorem 6.6 in Rudin's Functional Analysis
Everything that follows is from Rudin's Functional Analysis : $\def\L{\Lambda} \def\DDD{\mathcal{D}} \def\sbe{\subseteq} \def\W{\Omega} \def\RR{\mathbb{R}} \def\CC{\mathbb{C}} $ Below $\DDD$ is the space of test (smooth, compactly supported) functions from an open set $\Omega\sbe\RR^n$ to $\CC$ . Such space is given a complete, unmetrizable topology $\tau$ . Similarly $\DDD_K$ is the space of test functions $\Omega\to\CC$ whose support lies in the compact set $K$ . Each $\DDD_K$ has a Fréchet space topology $\tau_K$ which corresponds with the subspace topology under $\tau$ . Theorem 6.6: suppose $\L$ is a linear mapping of $\DDD$ into a lctvs $Y$ . Then the following are equivalent: a) $\L$ is continuous. b) $\L$ is bounded. c) If $\phi_i\to 0$ in $\DDD(\W)$ , then $\L\phi_i\to0$ in $Y$ . d) The restriction of $\L$ to any $\DDD_K\sbe\DDD(\W)$ is continuous. The proof shows a) $\implies$ b) $\implies$ c) $\implies$ d) $\implies$ a). I struggle to understand the steps b) $\implies$ c) an
Let's start with (b) $\Rightarrow$ (c). As $\mathcal{D}_K$ carries the subspace topology of $\mathcal{D}$, if $E$ is a bounded set in $\mathcal{D}_K$ then it is bounded in $\mathcal{D}$. A linear map $\Lambda : \mathcal{D} \to Y$ between tvs is bounded if, for any bounded $E$, $\Lambda(E)$ is bounded in $Y$. Assume $\Lambda |_{\mathcal{D}_K}$ is not bounded; then there is $E$ bounded in $\mathcal{D}_K$ such that $\Lambda(E)$ is not bounded. But $E$ is bounded in $\mathcal{D}$, so this contradicts condition (b), and $\Lambda|_{\mathcal{D}_K}$ is bounded. Now for (d) $\Rightarrow$ (a). The preimage under a linear map of a balanced set is balanced. Indeed, for any $a$ such that $0<|a|\le1$ we have $$ a \Lambda^{-1}(U)=\Lambda^{-1} (aU) \subseteq \Lambda^{-1}(U), $$ and the case $a=0$ is immediate since $\Lambda 0 = 0 \in U$. The proof that the preimage of a convex set is convex is very similar. So $V$ is convex and balanced. By Theorem 6.5(a) of the second edition of Rudin's Functional Analysis, you have that $V$ is open iff $V \cap \mathcal{D}_K \in \tau_K$. So you have $\Lambda^{-1}(U)$
|functional-analysis|analysis|proof-explanation|distribution-theory|
0