| title (string) | question_body (string) | answer_body (string) | tags (string) | accepted (int64) |
|---|---|---|---|---|
What is the reason for endowing the positive semidefinite cone with the Affine Invariant Riemannian Metric instead of a Euclidean metric?
|
I came across this post Distances defined in manifold of symmetric positive definite matrices because I had the same questions. I did not understand the answers provided, so I wanted to try to answer it myself. Originally, I posted my justifications and intuitions as an answer, but given I do not have a background in differential geometry and was unsure about my answer, I thought it would be better to post it as a question. The two main points I was trying to address were "I always pictured it as a "curved sheet" like the shell of a sphere but not the sphere itself; therefore, the idea of calling the PSD cone a manifold was confusing" The main question being asked: "if I understand correctly, SPD matrices lie inside a convex set i.e. linear combinations of SPD matrices will be SPD. Intuitively, I don't see the need of defining a distance that goes along the curvature of SPD matrices since the set is dense" Do these justifications/intuitions correctly address those questions: "I always
|
Although it is easier to imagine a manifold as a hyper-surface, a manifold does not need to sit in some ambient space of higher dimension to be a manifold. The validity of the first sentence depends on its interpretation. In modern topology and differential geometry, manifolds are indeed abstract and are not considered as subsets of some "ambient space." Nevertheless, every manifold $M$ is naturally embedded as a hypersurface in $M\times \mathbb R$. But, most likely, in the original question it was meant to be a hypersurface in some Euclidean space. Then the answer is that there are, for instance, surfaces which cannot be embedded in $\mathbb R^3$, e.g. the projective plane and the Klein bottle. So take the PSD cone formed by the set of $2\times 2$ PSD matrices. This would be a 3-manifold since locally it is diffeomorphic to $\mathbb{R}^3$. This is correct but, actually, it is not only locally diffeomorphic to $\mathbb{R}^3$, but is globally diffeomorphic to $\mathbb{R}^3$. A bi
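For intuition about why one might prefer the affine-invariant metric over the Euclidean one on this cone, here is a minimal numerical sketch (assuming NumPy and SciPy; the helper names `random_spd` and `airm_distance` are my own). It compares the Frobenius distance with the standard affine-invariant distance $d(A,B)=\lVert\log(A^{-1/2}BA^{-1/2})\rVert_F$ and illustrates the congruence invariance that gives the metric its name:

```python
import numpy as np
from scipy.linalg import sqrtm, logm, inv

rng = np.random.default_rng(0)

def random_spd(n=2):
    # M M^T + I is symmetric positive definite
    m = rng.standard_normal((n, n))
    return m @ m.T + np.eye(n)

def airm_distance(a, b):
    # affine-invariant Riemannian distance ||logm(A^{-1/2} B A^{-1/2})||_F
    s = inv(sqrtm(a))
    return np.linalg.norm(logm(s @ b @ s), 'fro')

a, b = random_spd(), random_spd()
print("Euclidean (Frobenius) distance:", np.linalg.norm(a - b, 'fro'))
print("Affine-invariant distance     :", airm_distance(a, b))

# invariance: d(G A G^T, G B G^T) = d(A, B) for any invertible G,
# which the Frobenius distance does not satisfy
g = rng.standard_normal((2, 2)) + 2 * np.eye(2)
print("After congruence by G         :", airm_distance(g @ a @ g.T, g @ b @ g.T))
print("Frobenius after congruence    :", np.linalg.norm(g @ a @ g.T - g @ b @ g.T, 'fro'))
```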
|
|differential-geometry|metric-spaces|riemannian-geometry|smooth-manifolds|positive-semidefinite|
| 0
|
Let $T \ : \ V\rightarrow V$ be a linear transformation such that $T^2=I$, that is, $T\circ T=I \implies T(T(v))=v$. Show T is an isomorphism.
|
For the case of $V$ being finite-dimensional, let $\dim(V)=n$. Let $v\in V$ with $T(v)=0$. \begin{align*} T(T(v))&=T(0) \\ T(T(v))&=0 \\ \implies v&=0 \end{align*} So $\ker(T)=\{0\}$, which implies that $T$ is one-to-one. Since we are assuming $\dim(V)=n$, we know by rank-nullity that \begin{align*} \dim(V)&=\dim(\ker(T))+\dim(\operatorname{ran}(T)) \\ n &= 0+\dim(\operatorname{ran}(T)) \\ n &= \dim(\operatorname{ran}(T)) \end{align*} So $T$ is onto and a linear isomorphism in the case of a finite-dimensional vector space. For the infinite-dimensional case, let $V$ be infinite-dimensional and $V_n$ be $n$-dimensional. Let $v\in V$. Then $v\in V_n$ for some $n$. Since $T(v)=0\implies v=0$ for $v\in V_n$, $T(v)=0\implies v=0$ for $v\in V$ since any $v\in V$ is also in some $V_n$. So $\ker(T)=\{0\}$ and $T$ is one-to-one over $V$. Let $w\in V$, then $w\in V_n$ for some $n$ and we know $\exists v\in V_n$ such that $T(v)=w$ since $T$ is onto over $V_n$. So we know for any $w\in V$, $\exists v\in V$ such that $T(v)=
|
It's much easier than that. In order to be an isomorphism, $T$ needs to have an inverse linear map $S:V\to V$ . But the assumption $T^2=I$ simply means that $T$ is an inverse of itself, so in particular an inverse exists. No need to use anything nontrivial, like the rank-nullity theorem.
|
|linear-algebra|linear-transformations|
| 0
|
Chain rule and differentiability of $|x|^2$
|
Going through Thomas Calculus, question 90 in the chapter on Chain Rule: Suppose that $f(x) =x^{2}$ and $g(x) =|x|$ . Then the composites $$ ( f\circ g)( x) =|x|^{2} =x^{2} \ \ \ \ \ \ and\ \ \ \ \ ( g\circ f)( x) =|x^{2} |=x^{2} $$ are both differentiable at $x=0$ even though $g$ itself is not differentiable at $x=0$ . Does this contradict the Chain Rule? I understand that $|x^{2}|$ is differentiable at $x=0$ because the inner function is differentiable. And the outer function receives only non-negative values, so it also turns out to be differentiable. But I don't believe that $f\circ g$ is differentiable. While the expression $|x|^{2}$ can be simplified to $x^{2}$ , these are in fact different functions. And because the inner $g$ isn't differentiable at $x=0$ , the whole composition isn't differentiable at this point. But strictly speaking the question itself doesn't sound correct - it says the composites are both differentiable. But what the author meant is the 3d function $c(x)=x^
|
A function is what it outputs for given input. Different formulas giving the same output correspond to the same function. So here $c=f\circ g=g\circ f$ . Functions $f:X\rightarrow Y$ can be defined in many ways. A common way is to view them as subsets $S$ of $X\times Y$ (such that for every $x$ there is a unique $y$ s.t. $(x,y)\in S$ ). This way we can see that different formulas giving the same outputs are in fact intrinsically the same function.
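To make the point concrete, here is a tiny numerical sketch (my own illustration, assuming NumPy): the difference quotient of $c(x)=|x|^2$ at $0$ tends to $0$, and $|x|^2$ agrees with $x^2$ at every input, so they are the same function.

```python
import numpy as np

c = lambda x: np.abs(x) ** 2

h = 10.0 ** -np.arange(1, 8)              # shrinking increments
print((c(h) - c(0)) / h)                   # -> 0, so the derivative at 0 exists and equals 0
print((c(-h) - c(0)) / (-h))               # same limit from the left

xs = np.linspace(-2, 2, 9)
print(np.allclose(c(xs), xs ** 2))         # True: identical outputs, hence the same function
```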
|
|derivatives|chain-rule|
| 0
|
Any other ways to execute Integration rather than viewing it as the inverse of Differentiation?
|
I'm learning the basics of Calculus. I'm good at differentiating (basic level). But when it comes to integration, it seems tricky as it takes time for me to think of a function for which the given one is the derivative. People seem to use some tricks when integrating two or more functions in multiplication by assigning one as u and representing the other as a derivative of u and so on. My question is - Is there any other intuitively pleasing way to integrate rather than this differentiation-dependent approach, which is difficult to convince my mind as it is not that intuitive?
|
Integration is much more than a clever search for expressions that have the right derivative. The fundamental theorem of calculus is, in a way, the answer to your question. If you can find an antiderivative of some function (using some technique you've learned or a clever trick you stumble on*) then it's easy to find the area under that function over some interval: it's the difference between the values of the antiderivative. If you can't find a nice expression for an antiderivative using tools you know (and sometimes that will be provably impossible) you can always find a not so nice one by laboriously calculating areas under its graph using Riemann sums or any other argument using limits that gives you numerical estimates as good as you wish. (*) If it is really the clever tricks you want to know more about, you can search for integration tricks.
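As a small illustration of that last point, here is a sketch (my own example, assuming NumPy) of approximating $\int_0^1 e^{-x^2}\,dx$, whose antiderivative has no elementary closed form, with left-endpoint Riemann sums:

```python
import numpy as np

def left_riemann_sum(f, a, b, n):
    x = np.linspace(a, b, n, endpoint=False)   # left endpoints of n equal subintervals
    return np.sum(f(x)) * (b - a) / n

f = lambda x: np.exp(-x ** 2)
for n in (10, 100, 1000, 100000):
    print(n, left_riemann_sum(f, 0.0, 1.0, n))
# the values settle near 0.74682, which equals erf(1) * sqrt(pi) / 2
```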
|
|calculus|integration|derivatives|
| 0
|
Where did I go wrong in solving $y' + 2y = 1$
|
To solve this I first solve the characteristic equation $r+2=0$, and since $r = -2$ I have $y_{1} = e^{-2x}$. Then I get a particular solution $y_{p} = C$ because the number on the right side is just the constant 1 (what is this "number on the right side" called?). Plugging $y_{p}=C$ into the original equation I get $2C = 1$ and $C=\frac{1}{2}$. So my solution to this question is $y_{c}=\frac{1}{2} + e^{-2x}$. However the correct answer is $y_{c}=\frac{1}{2} - \frac{1}{2} e^{-2x}$. What did I do wrong?
|
The complementary function (or general solution, or what you have called $y_1$ ) is $y_1 = Ae^{-2x}$ . You can't determine this constant unless you have a boundary condition. So actually, the correct answer is $y_c = \frac 12 + A e^{-2x}$ , where $A$ is an arbitrary constant that can be determined by a boundary condition. In this case, you should have some extra information that lets you determine $A = -\frac 12$ . We know $y = \frac 12 + Ae^{-2x}$ and $y(0) = 0$ . Substituting $x=0$ and $y=0$ , we have $0 = \frac 12 + A e^{-2(0)} = \frac 12 + A$ , so $A = - \frac 12$ . Thus, $y = \frac 12 - \frac 12 e^{-2x}$ as expected.
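For readers who want to check this mechanically, a short SymPy sketch (my addition; the initial condition $y(0)=0$ is the one used above to pin down $A=-\frac12$):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

general = sp.dsolve(sp.Eq(y(x).diff(x) + 2 * y(x), 1), y(x))
print(general)        # y(x) = C1*exp(-2*x) + 1/2

particular = sp.dsolve(sp.Eq(y(x).diff(x) + 2 * y(x), 1), y(x), ics={y(0): 0})
print(particular)     # y(x) = 1/2 - exp(-2*x)/2
```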
|
|ordinary-differential-equations|
| 1
|
Few short questions on notational choice
|
There are few questions on notational choice that seem to come up a number of times. It seems that, while some of this might be somewhat context dependent, it might be useful to get a general idea. I will just briefly go through each of them. (Q1) Suppose we wanted to identify an ordinal by taking some kind of minimum. We would sometime get two choices: (1a) $\min\{x \in \mathrm{Ord}\,|\,\,\mathrm{some\,condition\,involving\,} x\}$ . However, it could also be the case that we could (relatively easily) identify an ordinal $\alpha$ such that the above ordinal could equivalently be written as (1b) $\min\{x \in \alpha\,|\,\,\mathrm{some\,condition\,involving\,} x\}$ . My feeling is that both are basically equally good and which is better is largely context dependent? (Q2) Suppose we want to write a condition like ( $a,b,c,d$ are specific ordinals): (2a) $\forall i \in \mathrm{Ord} \, \forall j \in \mathrm{Ord} \,[((a\leq i . Now suppose we are dealing with a scenario concerning only ordina
|
You've raised some interesting points about notation in set theory and ordinals. Here are my thoughts: (Q1) Both notations, (1a) and (1b), are correct and the choice between them is largely context-dependent. If $\alpha$ is easily identifiable and makes the condition simpler or more intuitive, then (1b) might be preferred. Otherwise, (1a) is a more general notation that doesn't assume the existence of such an $\alpha$ . (Q2) Again, all notations (2a), (2b), and (2c) are correct. The choice depends on the context and personal preference. Notation (2c) is indeed more concise, but it might be less familiar to some readers. The interval notation (2d) is not commonly used for ordinals, but it's understandable if you explain it beforehand. (Q3) The choice between (3a) and (3b) depends on the specific context and the audience's familiarity with the notation. If the notation $\alpha + x$ is clearly defined and does not conflict with other notations, it can be used. However, it's important to n
|
|notation|ordinals|
| 0
|
Why couldn't the duplicate cases be "divided out"?
|
A group of 8 people is to be chosen from three countries, K, L, and M. K has 3 people, L has 4 people, and M has 5 people. What is the number of ways in which the group can be chosen if it includes at least 1 person from each country? I understand that a solution is to take the complement: all possibilities - only K and M - only L and M = 12C8 - 8C8 - 9C8 = 485. However, in my first attempt at this question, I took a more direct approach: 1 from K x 1 from L x 1 from M x 5 from remaining = 3C1 x 4C1 x 5C1 x 9C5 = 7560. This is clearly wrong as all possibilities = 12C8 = 495. I now understand and believe that this is wrong because it counts K1L1M1K2... and K2L1M1K1... as unique and different cases when they are the same. However, I tried to think of an add-on to my answer to "divide out" the duplicate cases. Intuitively to me, there should be such a factor and it should be a multiple of 2. Yet, when you take 7560/485, it yields a non-whole number. Why is this so? Why can't I "divide out" the
|
Suppose you chose K1L1M1 and then K2M2M3M4M5. This is $8$ people in total. The designated K could just as well have been K2, and the designated M could have been any of M1, M2, M3, M4 or M5, in the sense that you described in your post. So this combination is counted $2\cdot1\cdot5=10$ times. Now suppose you chose K1L1M1 and then K2L2M2M3M4. The designated people could have been K1 or K2, L1 or L2, and any one of M1, M2, M3 or M4. So this combination is counted $2\cdot2\cdot4=16$ times. As you see, different combinations are counted a different number of times, so it is no wonder that the quotient $7560/485$ is not an integer.
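A brute-force enumeration makes this explicit (a sketch of my own, assuming Python's standard library; the labels K1, L1, ... are just for illustration):

```python
from itertools import combinations
from math import comb

people = [f"K{i}" for i in range(1, 4)] + [f"L{i}" for i in range(1, 5)] \
       + [f"M{i}" for i in range(1, 6)]

# groups of 8 containing at least one person from each country
groups = [g for g in combinations(people, 8) if {p[0] for p in g} == {"K", "L", "M"}]
print(len(groups))                       # 485, matching the complement count

# the direct method's total count: 3C1 * 4C1 * 5C1 * 9C5
print(3 * 4 * 5 * comb(9, 5))            # 7560

# each group is counted once per designated (K, L, M) triple it contains
multiplicity = lambda g: (sum(p[0] == "K" for p in g)
                          * sum(p[0] == "L" for p in g)
                          * sum(p[0] == "M" for p in g))
print(sorted({multiplicity(g) for g in groups}))     # [10, 12, 16, 18]: no single factor
print(sum(multiplicity(g) for g in groups))          # 7560 again
```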
|
|combinatorics|
| 1
|
Integration Inequality with Inner Products
|
Suppose $f: [1, \infty) \rightarrow [0, \infty)$ is continuous. Show that $(\int_{1}^{\infty}f)^{2} \leq \int_{1}^{\infty} x^{2}f(x)^{2}dx$ This problem is from Axler's Linear Algebra Done Right, 4e (problem 6A.18). So far, I've done the following: Considering the inner product $\langle f,g\rangle = \int_{1}^{\infty}fg$ , and multiplying both sides of the inequality by $\int_{1}^{\infty}1$ , we get $\int_{1}^{\infty}1 (\int_{1}^{\infty}f)^{2}\leq ||xf||\cdot||1||$ Using Cauchy-Schwarz and the fact $f$ is nonnegative: $\int_{1}^{\infty}f \leq \int_{1}^{\infty}xf = |\langle xf,1\rangle| \leq ||xf||\cdot||1||$ I got stuck here, as I don't really know how to combine these inequalities to solve the problem. I would be thankful for feedback about what I've done so far and hints on how to continue. Thanks!
|
Note that $f(x)=\left(\frac1{x}\right)(xf(x))$ . Then, we have $$\begin{align} \left(\int_1^\infty f(x)\,dx\right)^2&=\left(\int_1^\infty \left(\frac1x\right)(xf(x))\,dx\right)^2\\\\ \end{align}$$ Now apply Cauchy-Schwarz.
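Spelled out (this completion is mine, not part of the original hint), Cauchy–Schwarz applied to the factors $\frac1x$ and $xf(x)$ gives

$$\left(\int_1^\infty f(x)\,dx\right)^2=\left(\int_1^\infty \frac1x\cdot xf(x)\,dx\right)^2\le\left(\int_1^\infty \frac{dx}{x^2}\right)\left(\int_1^\infty x^2 f(x)^2\,dx\right)=\int_1^\infty x^2 f(x)^2\,dx,$$

since $\int_1^\infty x^{-2}\,dx = 1$.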
|
|linear-algebra|cauchy-schwarz-inequality|
| 1
|
Residual Gerbe is Étale Locally a Classifying Stack
|
I heard from someone that, if $x:\text{Spec}k\to\mathcal{X}$ is a point of a Deligne-Mumford stack (algebraic should be OK, I am only assuming Deligne-Mumford so we know the residual gerbe exists), then the residual gerbe $\mathcal{Z}_x$ is étale-locally the classifying stack $\mathcal{B}_k\underline{ \text{Aut}}_k(x)$ , i.e. the stack $[\text{Spec}k/\underline{ \text{Aut}}_k(x)]$ , with the trivial action of $\underline{ \text{Aut}}_k(x)$ , which is the automorphism group scheme of the point $x$ . The same person told me that if the residual gerbe is "split" or "neutralized", then in fact it is the classifying stack. I cannot find a reference for the residual gerbe being étale-locally a classifying stack, nor can I even find a definition of what a "split" or "neutralized" gerbe is (perhaps they mean neutral? i.e. there is a section of $\mathcal{Z}_x \to Z$ , where $Z$ is the algebraic space over which $\mathcal{Z}_x$ is a gerbe?) Sadly, I cannot simply ask this person, as they are no
|
Let $\mathcal{X}$ be the stack in question, and for simplicity, assume $\mathcal{X}$ is defined over $k$ . Let $x:\mathrm{Spec}(k)\to \mathcal{X}$ be a point. A residual gerbe is a gerbe, which is the content of stacks project Lemma 06QK. So the residual gerbe, being a gerbe over a field $k$ , has a section after a finite extension of $k$ , say $k'/k$ . A gerbe with a section is a classifying stack, say $BG$ over $k'$ , and this is proved in stacks project 06QG. By stacks project Lemma 06MW, the morphism $x$ factors through the residual gerbe, i.e., $x$ can be expressed as $\mathrm{Spec}(k)\to\mathcal{G}_x\to \mathcal{X}$ . Now $(\mathcal{G}_x)_{k'}\cong BG$ , and since $\mathcal{G}_x\to \mathcal{X}$ is a monomorphism (stacks project Lemma 06MT), $BG\cong (\mathcal{G}_x)_{k'}\to \mathcal{X}_{k'}$ is still a monomorphism, so by stacks project Lemma 06R5, $G$ is the stabilizer group of the point $x_{k'}: \mathrm{Spec}(k)\to \mathcal{X}_{k'}$ . This means that the residual gerbe is a classif
|
|algebraic-stacks|
| 0
|
Convexity at a point
|
This is more likely a fact-finding question. Consider a function $f:[a,b]\rightarrow\mathbb{R}$ and $c\in[a,b]$ . Is there any notion of convexity at the point $c$ ?
|
There are a few notions of convexity at a point that I am aware of: Let $I$ be an open interval and $f$ be a real-valued function defined on $I$ . Then with respect to $I$ a point $x_0 \in I$ is a point of convexity of $f$ if \begin{equation} f(x_0) \le tf(x_1) + (1-t)f(x_2) \end{equation} for any $x_1,x_2 \in I$ and $0 < t < 1$ such that $x_0=tx_1 + (1-t)x_2$ . $f$ is convex at $x_0\in I$ if for any $x_1 \in I$ other than $x_0$ , and $0 < t < 1$ , \begin{equation} f(tx_0+(1-t)x_1) \le tf(x_0) + (1-t)f(x_1). \end{equation} $f$ is punctually convex (or p-convex, for short) at $x_0\in I$ if \begin{equation} f(x_0) + f(x_1+x_2-x_0) \le f(x_1) + f(x_2) \end{equation} whenever $x_0$ is strictly between $x_1,x_2 \in I$ . $f$ is totally convex at $x_0\in I$ if $\varphi(x,x_0):=\dfrac{f(x)-f(x_0)}{x-x_0}$ is an increasing function of $x$ on $I\setminus \{x_0\}$ . Equivalently, $\Psi(x_0,x_1,x_2):=\dfrac{\varphi(x_2,x_0)-\varphi(x_1,x_0)}{x_2-x_1} \ge 0$ for any
|
|real-analysis|functional-analysis|functions|convex-analysis|
| 0
|
Chain rule and differentiability of $|x|^2$
|
Going through Thomas Calculus, question 90 in the chapter on Chain Rule: Suppose that $f(x) =x^{2}$ and $g(x) =|x|$ . Then the composites $$ ( f\circ g)( x) =|x|^{2} =x^{2} \ \ \ \ \ \ and\ \ \ \ \ ( g\circ f)( x) =|x^{2} |=x^{2} $$ are both differentiable at $x=0$ even though $g$ itself is not differentiable at $x=0$ . Does this contradict the Chain Rule? I understand that $|x^{2}|$ is differentiable at $x=0$ because the inner function is differentiable. And the outer function receives only non-negative values, so it also turns out to be differentiable. But I don't believe that $f\circ g$ is differentiable. While the expression $|x|^{2}$ can be simplified to $x^{2}$ , these are in fact different functions. And because the inner $g$ isn't differentiable at $x=0$ , the whole composition isn't differentiable at this point. But strictly speaking the question itself doesn't sound correct - it says the composites are both differentiable. But what the author meant is the 3d function $c(x)=x^
|
In a comment you write A function $f$ from a set $D$ to a set $Y$ is a rule that assigns a unique (single) element $f(x)∈Y$ to each element $x∈D$ . This is a "naive" definition because it does not explain what a rule is. I think you understand it as a sort of an algorithm (or a software program) which can be applied to $x \in D$ and produces an output $f(x) \in Y$ . With this understanding one may indeed think that the assignments $$x \mapsto x^2$$ $$x \mapsto \lvert x \rvert^2$$ $$x \mapsto \lvert x^2 \rvert$$ $$x \mapsto (x+1)^2 - 2x -1$$ are different functions because there are different algorithms to compute the output. However, this is a misunderstanding. In mathematics one is usually not interested in the specific steps used to calculate the output, but only in the output itself. As Guillaume Berlat explained in his answer, the modern (though more abstract) point of view is to define a function $f : D \to Y$ as a subset $f \subset D \times Y$ with the property that for each $x \i
|
|derivatives|chain-rule|
| 1
|
Using derivatives to prove an inequality and, as an application, computing a limit
|
Show that for every $x \in \mathbb{R}^{+}$ $$x-\frac{x^{2}}{2} < \log(1+x) < x$$ and as an application, compute the limit $$\lim_{n\to \infty}\prod_{k=1}^{n}\left(1+\frac{k}{n^{2}}\right).$$ My attempt: Consider the map $f:\mathbb{R}_{0}^{+} \to \mathbb{R}$ defined by $f(x) = \log (x+1) -x+\frac{x^{2}}{2}$ . Then $f$ is differentiable and $$f'(x) = \frac{1}{1+x} - 1 + x = \frac{x^{2}}{x+1}.$$ Therefore, $f'(x) = 0$ if and only if $x = 0$ and $f'(x) > 0$ for all $x>0$ . This means that $f$ attains its absolute minimum at 0 and so we have $$\log (x+1) -x+\frac{x^{2}}{2} = f(x) > f(0) = 0$$ for every $x>0$ , which implies that for all $x>0$ $$x-\frac{x^{2}}{2} < \log(x+1).$$ Similarly, define $g: \mathbb{R}_{0}^{+} \to \mathbb{R}$ by $g(x) = x- \log (x+1)$ . Then $g$ is differentiable and $$g'(x) = \frac{x}{x+1}.$$ Hence, a similar argument shows that $g$ attains its absolute minimum at 0 and, for every $x>0$ , $$\log(x+1) < x.$$ This shows that $$x-\frac{x^{2}}{2} < \log(x+1) < x \quad \text{for all } x>0.$$ Now note that, for every $1 \leq k \leq n$ , we have $$\log
|
I agree with @Aig's comment. Moreover, the last line of reasoning ("But then...") seems unclear to me : I would have written : $$\forall n\geq 1, \log( \prod_{k=1}^{n}\left(1+\frac{k}{n^{2}}\right))= \log \left(1+\frac{1}{n^{2}}\right) + \log \left(1+\frac{2}{n^{2}}\right) + \ldots + \log\left(1+\frac{n}{n^{2}}\right)$$ Then $$\lim_{n\to \infty}\prod_{k=1}^{n}\left(1+\frac{k}{n^{2}}\right)=\mathrm e^\frac12$$
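A quick numerical sanity check of that value (my addition):

```python
import math

for n in (10, 100, 1000, 100000):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1.0 + k / n**2
    print(n, prod)

print("e^(1/2) =", math.exp(0.5))   # the products approach 1.6487...
```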
|
|sequences-and-series|limits|analysis|derivatives|
| 1
|
Proof that $x \in O \wedge y \in H_x$ is $\Pi_1^1$
|
I am trying to read G.E. Sacks's book on higher order recursion theory, and he has this result (where $O$ is Kleene's $O$ and $H_x$ is a hyperarithmetic set, i.e. where we have sets of the form $H_{2^a} = (H_e)'$ for some $e$ and $'$ is the jump). Chapter 2, Theorem 1.3: Each of the following predicates is $\Pi_1^1$ : (i) $x \in O$ and $y \in H_x$ (ii) $x \in O$ and $y \not \in H_x$ The proof for Theorem 1.3 in his book however only provides a proof for (i), which is to let $A(x)$ be the conjunction of: $(X)_1 = \emptyset$ $ [a \in O \implies (X)_{2^a} = (X)'_a]$ $[3 \cdot 5^e \in O \implies (X)_{3 \cdot 5^e} = \{ \langle x,n\rangle : x \in ( X )_{\{e\}(n)} \}$ Now, Sacks notes that the set $X^* := x \in O \text{ and } y \in H_x$ is the intersection of all solutions of $A(X)$ , and that the Theorem follows from the fact that $x \in O$ is $\Pi_1^1$ and that since $X^*$ is the intersection of all solutions of $A(X)$ , we have that $X^*$ is $\Pi_1^1$ (using Chapter 1, Theorem 1.6 I of his
|
Your definition of $A'$ doesn't do what you need it to do; it doesn't work to just give some set that $(X)_{2^a}$ and $(X)_{3\cdot 5^e}$ aren't equal to — you need to say what they actually are . (As a side note, it's probably best to avoid using notations like $A'$ in arguments involving the jump operator, since that's also denoted by $'$ .) Anyway, just follow Sacks's argument, but instead of using $H_x,$ use its complement $J_x = \omega \setminus H_x.$ These sets can be characterized by an effective transfinite induction on $x\in\mathscr O,$ similar to the one for $H_x:$ $$J_0=\omega$$ $$J_{2^a} = \omega \setminus \big((\omega\setminus J_a)\big)'$$ $$J_{3\cdot 5^e} = \omega\setminus\{ \langle x,n \rangle : x\not\in J_{\{e\}(n)} \}$$ (Here I'm just (1) taking the complement of the previous sets in the $J$ hierarchy to get sets in the $H$ hierarchy, (2) applying the effective transfinite induction in the $H$ hierarchy, and then (3) taking the complement again to get to the appropriate
|
|logic|proof-explanation|computability|higher-order-logic|
| 1
|
Proof that $\frac{2^x}{3}$ can never equal an integer for all integers x
|
This is probably quite simple but bear in mind that I am a beginner. As part of a larger proof, I need to show that $\frac{2^x}{3}$ can never equal an integer, for any integer $x$ . As a follow up question, is there a general method I can apply to show that other related formulae can also never equal an integer, such as $\frac{2^x - 1}{6}$ , $\frac{2^x-3}{9}$ and $\frac{2^x-1}{18}$ .
|
Suppose $\frac{2^x}{3}=k$ for some positive integer $k$. Then $2^x=3k$ and $$x=\frac{\log(3k)}{\log 2}=\frac{\log 3}{\log 2}+\frac{\log k}{\log 2}.$$ The first term is irrational (if it equalled $p/q$ we would have $3^q=2^p$, which is impossible). The second term is an integer precisely when $k$ is a power of $2$, and in that case $x$ is an irrational number plus an integer, hence irrational, so not an integer. When $k$ is not a power of $2$, a shorter argument finishes the job: an integer $x$ with $2^x=3k$ would make $3$ a divisor of $2^x$, which is impossible since $2$ is the only prime factor of $2^x$. In every case $x$ cannot be an integer when $k$ is an integer, so $2^x/3$ is never an integer.
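Independently of the logarithm argument, the claims themselves are easy to check by looking at residues, since $2^x \bmod m$ is periodic in $x$ (a short sketch of my own):

```python
# 2^x mod 3 only takes the values 1 and 2, never 0, so 3 never divides 2^x
print({pow(2, x, 3) for x in range(2000)})            # {1, 2}

# the same idea handles the follow-ups: e.g. 2^x mod 9 cycles through
# 1, 2, 4, 8, 7, 5 and never hits 3, so 9 never divides 2^x - 3
print(sorted({pow(2, x, 9) for x in range(2000)}))    # [1, 2, 4, 5, 7, 8]
```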
|
|integers|
| 0
|
Behavior of function $\sum_{j = n}^\infty \frac{\sin^2((2j-1) \pi x)}{(2j-1)^2}$
|
For a positive integer $n$ , define the function $$ F_n(x) = n^2 \sum_{j = n}^\infty \frac{\sin^2((2j-1) \pi x)}{(2j-1)^2}. $$ I am trying to understand the behavior of $F_n(x)$ in the following sense. For a positive exponent $\alpha$ , I would like to compute the limit $$ L_\alpha = \lim_{n \to \infty} F_n(1/n^\alpha). $$ Based on plotting in Mathematica, it appears that $L_\alpha$ has the following behavior: it seems to be $0$ for $\alpha > 2$ , infinite for $\alpha < 2$ , and some finite number for $\alpha = 2$ . I tried to simplify the summation but I am having difficulty passing to the limit.
|
Let me verify the case $\alpha = 2$ . Using the identity $\sin^2(x) = \frac{1}{2}(1 - \cos(2x))$ and the well known sum $\sum_{j = 1}^\infty \frac{1}{(2j-1)^2} = \frac{\pi^2}{8}$ , write $$ \begin{align*} n^{-2}F_n(x) &= \sum_{j = n}^\infty \frac{1}{2(2j-1)^2} - \sum_{j = n}^\infty\frac{\cos(2 \pi (2j - 1) x)}{2(2j-1)^2} \\ &= \frac{\pi^2}{16} -\sum_{j=1}^{n-1} \frac{1}{2(2j-1)^2} -\sum_{j = n}^\infty\frac{\cos(2 \pi (2j - 1) x)}{2(2j-1)^2} \\ \end{align*} $$ Now recall the Fourier series expansion of $|x|$ for $x \in (-\frac{1}{2}, \frac{1}{2})$ : $$ |x| = \frac{1}{4} - \sum_{j=1}^\infty \frac{2\cos(2\pi(2j-1)x)}{\pi^2(2j -1)^2} $$ Substituting this in the above, we obtain $$ \begin{align*} n^{-2}F_n(x) &= \frac{\pi^2}{16} -\sum_{j=1}^{n-1} \frac{1}{2(2j-1)^2}+ \frac{\pi^2}{4}|x| -\frac{\pi^2}{16}+\sum_{j=1}^{n-1} \frac{\cos(2\pi(2j-1)x)}{2(2j -1)^2} \\ &= \frac{\pi^2}{4}|x| -\sum_{j=1}^{n-1} \frac{1 -\cos(2\pi(2j-1)x)}{2(2j -1)^2} \\ \end{align*} $$ Thus $$ \begin{align*} F_n(n^{-2})
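The Fourier expansion of $|x|$ used above can be checked numerically; a small sketch (my addition, assuming NumPy):

```python
import numpy as np

x = np.linspace(-0.49, 0.49, 7)
j = np.arange(1, 20001)[:, None]        # truncate the series at j = 20000
series = 0.25 - np.sum(2 * np.cos(2 * np.pi * (2 * j - 1) * x)
                       / (np.pi ** 2 * (2 * j - 1) ** 2), axis=0)
print(np.max(np.abs(series - np.abs(x))))   # of order 1e-5: the identity checks out
```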
|
|limits|inequality|asymptotics|trigonometric-series|
| 0
|
We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty?
|
I have a question with my reasoning (or lack thereof) -- I understand what the correct answer is, but I do not see the gap in my logic. Premise: We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty? My Attempt: Let us start by determining the probability of allocating $n-1$ balls among $n$ buckets such that there is exactly one empty bucket. As $n\over{n}$ of the buckets are initially empty, the first ball is always placed in an empty bucket (i.e. probability of $n\over{n}$ ). After placing the first ball, ${n-1}$ of $n$ buckets are empty, so the probability of placing the 2nd ball into an empty bucket is ${n-1}\over n$ . This continues on for the $n-1$ balls as follows: $$\text{P(Exactly One Bucket Empty)}= { n\over n} \times { n-1\over n} \times { n-2\over n} \times ... \times { 2\over n} = {n! \over n^{n-1}}$$ Now we can multiply that probability by the probability that we allocate the fi
|
From the $n$ labelled balls, pick one to double up. Arrange the remaining $n-1$ and pick a hole. Finally place the double. This gives a probability of $$\frac{n(n-1)!n(n-1)}{2n^n}$$ with the factor of $2$ coming from the double appearing twice in each calculation.
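The resulting probability $\frac{n!\,n(n-1)}{2n^n}$ can be confirmed by brute force for small $n$ (a sketch of my own using the standard library):

```python
from itertools import product
from math import factorial

for n in (3, 4, 5):
    hits = sum(1 for assignment in product(range(n), repeat=n)
               if sum(b not in assignment for b in range(n)) == 1)
    formula = factorial(n) * n * (n - 1) / (2 * n ** n)
    print(n, hits / n ** n, formula)       # the two columns agree
```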
|
|probability|combinatorics|
| 0
|
compute the following integral in closed form : $\int_0^{\frac{π}{2}}\frac{x}{(1+\sqrt 2)\sin^{2}(x)+8\cos^{2} x}dx$
|
Evaluate $I=\int_0^{\frac{π}{2}}\frac{x}{(1+\sqrt 2)\sin^{2} (x)+8\cos^{2} x}dx$. How can I start on this hard integral? At first I tried $y=\frac{π}{2}$ but got no result, so I then used $y=\tan \frac{x}{2}$, so that $dx=\frac{2}{1+y^2}dy$, $x=2\arctan y$, $\cos x=\frac{1-y^2}{1+y^2}$ and $\sin x=\frac{2y}{1+y^2}$. So: $8\cos^{2} x+(1+\sqrt 2)\sin^{2} x=\frac{8(1-y^2)^2+4(1+\sqrt 2)y^2}{(1+y^2)^2}$. Now I get an $\arctan$ integral $I=2\int_0^{\infty}\frac{(1+y^2)\arctan y}{8(1-y^2)^2+4(1+\sqrt 2)y^2}dy$, but I don't know how to complete this work!
|
Let $a=\sqrt{8(\sqrt2-1)}$ \begin{align} &\int_0^{\frac{π}{2}}\frac{x}{(1+\sqrt 2)\sin^{2} (x)+8\cos^{2} x}dx\\ \overset{ibp}=& \ \frac a8\int_0^{\frac{π}{2}}\cot^{-1}\frac{\tan x}{a}dx = \frac a8 \int^\infty_{\frac{1}{a}}\frac{\ln t}{t^2-1}dt \\ =& \ \frac a{16}\bigg(\frac{\pi^2}3 +\text{Li}_2(\frac{a-1}a)+ \text{Li}_2(\frac{1}a)-\ln a \ln\frac{a+1}a\bigg)\\ \end{align}
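For anyone who wants to verify a closed form, a quick numerical reference value for the integral (my addition, assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: x / ((1 + np.sqrt(2)) * np.sin(x) ** 2 + 8 * np.cos(x) ** 2)
value, err = quad(integrand, 0, np.pi / 2)
print(value, err)    # compare any closed form against this value
```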
|
|integration|definite-integrals|closed-form|
| 0
|
Expectation of a function of multiple non-identical independent exponentially distributed random variables
|
I'm currently writing a set of notes on coalescent theory and I'm writing an example on how to directly calculate the probability of a given rooted genealogical tree. This is done by writing an expression for the probability in terms of the waiting time and then solving that. We have that $T_n \sim $ Exp $\left(\binom{n}{2}\right)$ And the expression I need to take the expectation of is $\frac{\theta^4}{2^4}\left(e^{-\theta\frac{2T_4 + 3T_3 + 2T_2}{2}}\left(T_3T_2(T_4+T_3)(T_4+T_3+T_2)\right) \right)$ Where $\theta$ is some constant representing the mutation rate. I'm totally lost on how to calculate this directly. A solution or resources on how to do this myself would be very much appreciated. Thank you
|
If $T$ is exponential with mean $m$ then $X=T/m$ is exponential with mean 1. Therefore your complicated exponential can be written $e^{aX_1+bX_2+cX_3}$ and its expectation is $\frac{1}{(1-a)(1-b)(1-c)}$. You are interested in the expectation of the product of this exponential by a complicated polynomial, therefore it is enough to compute, for suitable integers $\alpha,\beta,\gamma$ , $$E(e^{aX_1+bX_2+cX_3}X_1^{\alpha}X_2^{\beta}X_3^{\gamma})$$ $$=(\frac{d}{da})^{\alpha}\frac{1}{1-a}\times (\frac{d}{db})^{\beta}\frac{1}{1-b}\times (\frac{d}{dc})^{\gamma}\frac{1}{1-c}$$ $$=\frac{\alpha!}{(1-a)^{\alpha+1}}\times \frac{\beta!}{(1-b)^{\beta+1}}\times \frac{\gamma!}{(1-c)^{\gamma+1}}$$
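A Monte Carlo sanity check of the key identity $E\!\left(e^{aX}X^{\alpha}\right)=\frac{\alpha!}{(1-a)^{\alpha+1}}$ for $X\sim\mathrm{Exp}(1)$ and $a<1$ (my addition, assuming NumPy; the values $a=-0.5$, $\alpha=2$ are arbitrary):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
x = rng.exponential(1.0, size=2_000_000)

a, alpha = -0.5, 2
print(np.mean(np.exp(a * x) * x ** alpha))          # ~ 0.593
print(factorial(alpha) / (1 - a) ** (alpha + 1))    # 2 / 1.5^3 = 0.5925...
```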
|
|probability|expected-value|
| 1
|
I have got stuck with this concept of A.P.
|
Q) How to prove that the sequence $2,4,6,8,\ldots,1000$ is an A.P. (Arithmetic Progression)? First of all, the $1^{st}$ term of this sequence is $2$ and the common difference of this sequence is also $2$. Therefore this sequence is an A.P. Doubt: I can't understand how we can simply tell that the given sequence is an A.P. with $1^{st}$ term $2$ and common difference $2$. For example, it might be that $555$ is a term of this given sequence. Then how can we say that the sequence $2,4,6,8,\ldots,1000$ is an A.P. if $555$ is a term of this sequence? Please clear my doubt.
|
When you write $2,4,6,8,\ldots$ it is usually meant that there is some obvious pattern to the sequence, and the reader can easily guess what the sequence is. In this case, it is obvious that the arithmetic progression with first term $2$ and common difference $2$ is meant. If an author meant another sequence, they should have described their sequence differently. This is why it is sometimes better to avoid this way of defining a sequence and just write “the arithmetic progression with first term $2$ and common difference $2$”, so that there is no doubt about what exactly is meant. Your wording of the question (How to prove that this is an arithmetic progression) is not quite the right one. The question is rather what is meant by this notation.
|
|sequences-and-series|arithmetic-progressions|
| 0
|
We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty?
|
I have a question with my reasoning (or lack thereof) -- I understand what the correct answer is, but I do not see the gap in my logic. Premise: We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty? My Attempt: Let us start by determining the probability of allocating $n-1$ balls among $n$ buckets such that there is exactly one empty bucket. As $n\over{n}$ of the buckets are initially empty, the first ball is always placed in an empty bucket (i.e. probability of $n\over{n}$ ). After placing the first ball, ${n-1}$ of $n$ buckets are empty, so the probability of placing the 2nd ball into an empty bucket is ${n-1}\over n$ . This continues on for the $n-1$ balls as follows: $$\text{P(Exactly One Bucket Empty)}= { n\over n} \times { n-1\over n} \times { n-2\over n} \times ... \times { 2\over n} = {n! \over n^{n-1}}$$ Now we can multiply that probability by the probability that we allocate the fi
|
Using the product of two multinomial coefficients gives a mechanical way (sparing the labor of thought) to compute arrangements of distinct balls in distinct boxes: $\text{[Lay Down Pattern]} \times \text{[Permute]}$. With $n=5$ as an example, the number of arrangements where exactly $1$ box is empty will be given by $\Large\frac{5!}{2!1!1!1!0!}\times\frac{5!}{1!3!1!}$, which (ignoring factorials of $1$ and $0$) generalises to $\Large\frac{n!n!}{2!(n-2)!} = \frac{n!n(n-1)}{2}$, and dividing by the total number of possible arrangements $n^n$, $Pr = \Large\frac{n!\,n(n-1)}{2n^n} = \frac{(n-1)!(n-1)}{2n^{n-2}}$. You can see clearly why $2$ comes in the denominator, and why the exponent of $n$ is $(n-2)$.
|
|probability|combinatorics|
| 0
|
A locally bounded family need not be a bounded family
|
I have read that a locally bounded family need not be a bounded family, and that the following family shows this: $\mathcal{F} = \{ f_n : n \in \mathbb{N} \}$ , $f_n : \mathbb{E} \to \mathbb{C}$ , $f_n(z) =n z^n$ , where $\mathbb{E} = \{z \in \mathbb{C} : \vert z \vert < 1\}$ (so that this is a locally bounded family but not a bounded family). Unfortunately I am stuck at both claims. The definitions are as follows: Def. bounded family: A family $\mathcal{F} \subset \mathcal{O} (D)$ is called bounded in a subset $A \subset D$ , if there exists a real number $M > 0$ s.t. for all $f \in \mathcal{F}$ it holds that $\vert f \vert _A \leq M$ . Def. locally bounded family: A family $\mathcal{F} \subset \mathcal{O} (D)$ is called locally bounded in $D$ if for each point $z \in D$ there exists a neighborhood $U \subset D$ s.t. $\mathcal{F}$ is bounded in $U$ . What confuses me the most is that on one hand we have the factor $n$ in $f_n(z) =n z^n$ , on the other hand there is the power $z^n$ . So as $
|
Claim 1: $\mathcal{F}$ is not bounded. Proof: To show this, we have to show, that there exists no $M>0$ s.t. for all $f \in \mathcal{F}$ it holds that $\vert f \vert _A \leq M$ , or equivalently that $\sup\limits_{f \in \mathcal{F}} \sup \limits_{z \in \mathbb{E}} \vert f(z) \vert = \infty$ . For a fixed $f\in \mathcal{F}$ we have $f=f_n$ for some $n\in \Bbb N$ , and: $$ \sup \limits_{z \in \mathbb{E}} \vert f(z) \vert = \sup \limits_{z \in \mathbb{E}} n \vert z^n \vert = n \cdot 1^n = n $$ Then $$ \sup\limits_{f \in \mathcal{F}} \sup \limits_{z \in \mathbb{E}} \vert f(z) \vert = \sup\limits_{n\in \Bbb N} n = \infty$$ Claim 2: $\mathcal{F}$ is locally bounded. Proof: To show this, we have to show that for each point $z \in D$ there exists a neighborhood $U \subset D$ s.t. $\mathcal{F}$ is bounded in $U$ . In this claim we use the following Lemma: A family $\mathcal{F} \subset \mathcal{O}(\mathbb{B})$ in a ball $B_r(c)$ , $r>0$ is locally bounded in $B$ iff $\mathcal{F}$ is bounded in e
|
|functional-analysis|
| 1
|
Why must the order of $7$ either be $5$ or $10$ in $\mathbb{Z}_{11}^*$?
|
I have an old math exam question with the solution included, but there is a certain step of the solution I don't understand. Task: Determine the order of $7$ in $\mathbb{Z}_{44}^*$ Solution: From the Chinese Remainder Theorem, $\mathbb{Z}_{44}^*$ $\cong$ $\mathbb{Z}_4^*\times \mathbb{Z}_{11}^*$ . Therefore, it suffices to determine the orders of 7 in both factors $\mathbb{Z}_4^*$ and $\mathbb{Z}_{11}^*$ . In $\mathbb{Z}_4^*$ , the order is obviously $2$ (because $7 \not\equiv 1 \ \textrm{mod} \ 4$ but $7^2 = 49 ≡ 1 \ \textrm{mod} \ 4)$ , and hence ord( $7$ ) in $\mathbb{Z}_{44}^*$ is an even number. In the second factor, we have $7^2 = 49 ≡ 5 \not\equiv 1 \ \textrm{mod} \ 11$ . So the order is not $2$ , but either $5$ or $10$ . In both cases, the order (in $\mathbb{Z}_{44}^*$ ) must then be a multiple of $5$ , and therefore can only be $10$ . Question: How do you suddenly come to the conclusion that the order of $7$ must either be $5$ or $10$ in $\mathbb{Z}_{11}^*$ without explici
|
Say $\operatorname{ord}(7)$ is not a divisor of $10$ , for example $6$ as you said; then we have $$7^{\operatorname{ord}(7)}=7^6\equiv 1 \pmod{11}.\tag{*}$$ By Fermat's little theorem, $7^{10}\equiv 1\pmod{11}$ , so squaring $(*)$ gives $7^{12}\equiv 1\pmod{11}$ , while $7^{12}=7^{10}\cdot 7^2\equiv 7^2\pmod{11}$ ; hence $7^2\equiv 1\pmod{11}$ , which contradicts the computation $7^2\equiv 5\pmod{11}$ . This is also a consequence of Lagrange's theorem in group theory, by which the order of an element must divide the order of the group.
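The orders can also be computed directly (a small sketch of my own):

```python
def order(a, n):
    # multiplicative order of a modulo n (assumes gcd(a, n) = 1)
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(order(7, 4), order(7, 11), order(7, 44))   # 2, 10, 10
```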
|
|group-theory|chinese-remainder-theorem|
| 1
|
Understanding $\frac{\text{d}u}{\text{d}t}$ vs. $\frac{\text{D}u}{\text{D}t}$
|
I'm working on an unassessed course problem. The setup might be unnecessary for my question, but I give it in case it helps. An incompressible fluid of constant density $\rho$ has velocity $\boldsymbol{u}(x,t)$ . For fluid occupying a closed volume $V$ bounded by a surface $S$ , with outward normal unit vector $\boldsymbol{n}$ , deduce that $$\frac{\text{d}}{\text{d}t}\int_V\rho u_i\text{ d}V=\int_S\sigma_{ij}n_j\text{ d}S+\int_V\rho F_i\text{ d}V,$$ where $\sigma_{ij}$ is the stress tensor, and $\boldsymbol{F}$ is the body force per unit mass. Use Gauss' divergence theorem with $f_j=\sigma_{ij}a_i$ , where $\boldsymbol{a}$ is an arbitrary constant vector, to deduce that $$\rho\frac{\text{D}u_i}{\text{D}t}=\frac{\partial\sigma_{ij}}{\partial x_j}+\rho F_i,$$ throughout the fluid. The solution booklet contains the line, $\rho\text{ d}V$ is the mass of a fluid element, and does not change as the element moves around, so $$\frac{\text{d}}{\text{d}t}\int_V\rho u_i\text{ d}V=\int_V\left(\fr
|
OP answering own question. I think I found the answer. $\frac{\text{d}}{\text{d}t}$ and $\frac{\text{D}}{\text{D}t}$ are used as equivalent variant notation for the same operator.
|
|multivariable-calculus|physics|fluid-dynamics|
| 0
|
Bound on convolution: $ | (h * f^2) (x)| \leq \| f\|^2_2 g(h)$
|
I am trying to find bounds for the following quantity. Take two functions $f,h \in L^{1} \cap L^2$ but $\|h \|_{\infty} = \infty$ . Is there a way to obtain a bound of the following type: $$ | (h * f^2) (x)| \leq \| f\|^2_2 g(h)$$ where $g$ is some function of $h$ ? This would be the case if we could take the sup norm of $h$ out of the integral by using Holder's inequality, but in this case is not allowed as $\|h \|_{\infty} = \infty$ . PS: you can also assume that $(h * f^2) (x)$ is everywhere well-defined.
|
The given conditions do not imply $h * (f^2)\in L^\infty$ , so no bound of the type can hold. Let $\phi_a: \mathbb R\to [0,\infty]$ be defined by $\phi_a(x) = 1/|x|^a$ . Pick any $a_{1},a_2>0$ such that $a_1<\frac12$ and $1-a_1\le a_2<1$ , and put $h = \phi_{a_1} \mathbf 1_{[-1,1]}$ , $f=\sqrt{\phi_{a_2}} \mathbf 1_{[-100,100]}$ . By construction $h,f\in L^1\cap L^2$ . Now for, say, $|x|\le 1$ , $$ h * (f^2)(x)= \int_{-1}^1 h(y) f^2(x-y)dy = \int_{-1}^1 \frac{dy}{|y|^{a_1}|y-x|^{a_2}}, $$ and $h * (f^2)(0)=\infty$ since $a_1+a_2\ge 1$ . So $h * (f^2)$ is not bounded. In fact, by the triangle inequality and monotone convergence, $h * (f^2)(x)\to \infty$ as $x\to 0$ . Hence, $h * (f^2)\notin L^\infty$ . The point of blowup can be easily translated to any point; if $\tau_c f (x)= f(x-c)$ is translation by $c$ , then note $$ h* (\tau_cf)^2 = \tau_c(h*(f^2))$$ and now $h* (\tau_cf)^2$ blows up at $x=c$ . Since $c$ can be chosen arbitrarily and independently from $h$ , it is not even possible to have the bound $|h*
|
|real-analysis|functional-analysis|measure-theory|convolution|holder-inequality|
| 0
|
Show that $f(x, y)=(-1)^y x$ is an isomorphism from $\mathbb{R}^+\times \mathbb{Z}_2$ to $\mathbb{R}^*$.
|
Show that $f(x, y)=(-1)^y x$ is an isomorphism from $\mathbb{R}^{+} \times \mathbb{Z}_2$ to $\mathbb{R}^*$ . (REMARK: To combine elements of $\mathbb{R}^{+}\times \mathbb{Z}_2$ , one multiplies first components, adds second components.) Conclude that $\mathbb{R}^* \cong \mathbb{R}^+ \times \mathbb {Z}_2$ . I am stuck how to prove that the function is a bijection. I can show it is a surjection. Showing that it is an injection gives me an equation of $x$ and $y$ . I think fact that $y$ only takes the values of 0 and 1 is essential here. Should I start with $y_1=y_2=0$ then 1 then show that $x_1$ = $x_2$ ?
|
It's a homomorphism due to index rules. You have shown it's surjective. Suppose $f(a,b)=f(x,y)$ . Then $(-1)^ba=(-1)^yx$ . Then, since $a,x>0$ , $a=x$ . It follows that $b=y$ since $b,y\in\{0,1\}$ (which is how I presume you conceptualise $\Bbb Z_2$ ). Thus $(a,b)=(x,y)$ . Thus $f$ is injective.
|
|abstract-algebra|group-theory|group-isomorphism|
| 1
|
Prove that the following function is one-to-one
|
Define a function $g$ from the set of real numbers to $S$ by the following formula: $$ g(x) = \frac12\biggl( \frac x{1+|x|} \biggr) + \frac12,\quad x\in\mathbb{R}. $$ Prove that $g$ is a one-to-one correspondence. (It is possible to prove this statement either with calculus or without it.) What conclusion can you draw from this fact? My question is that what is the conclusion we can draw after we decide that it is a one-to-one correspondence? I would prove its one-to-one correspondence through its graph, which is one-to-one in that no two $x$ 's are mapped to the same $y$ .
|
That would mean that $$\forall x \in \mathbb{R}\,\,g(x) \in \mathbb {R}$$ And more importantly: $$ x = g^{-1}(g(x))$$ Meaning that the function is invertible . You can find all the conclusions here: https://en.wikipedia.org/wiki/Bijection
|
|algebra-precalculus|functions|discrete-mathematics|
| 0
|
$\int_{-\infty}^\infty \frac{1}{x^5+1}dx$ using contour integration.
|
I am wondering if I have correctly computed this integral, which I see in a lot of posts as being really hard. $\int_{-\infty}^\infty \frac{1}{x^5+1}dx$ . Consider the following contour: The poles of $\frac{1}{z^5+1}$ occur at $e^{i\left(\frac{\pi}{5} + \frac{2\pi}{5} n\right)}$ where $n=0,1,2,3,4$ . Our contour only includes the poles $n=0,1$ . First, we compute the outermost circle. We parameterize the integral with $\gamma_1(t) = Re^{it}-1$ where $0\leq t \leq \pi$ . $$\int_{\gamma_1}\frac{1}{z^5+1}dz = \int_0^{\pi}\frac{Rie^{it}}{(Re^{it}-1)^5+1}dt.$$ Then taking $R \to \infty$ , since the $R$ term in the denominator dominates, $$\lim_{R\to \infty} \left|\int_0^{\pi}\frac{Rie^{it}}{(Re^{it}-1)^5+1}dt \right|\leq \lim_{R\to\infty}\pi \sup_{t}\left|\frac{Rie^{it}}{(Re^{it}-1)^5+1}\right|= 0$$ and now for the inner circle $\gamma_2(t) = \epsilon e^{-i\theta}-1$ $0\leq \theta \leq \pi$ , by the binomial formula, $$\lim_{\epsilon\to 0} \int_0^{\pi}\frac{-\epsilon ie^{it}}{(\epsilon e^{i
|
To check your answer (when you get it)... Maple evaluates $$ \int_{-1+a}^{\infty }\! \left( {x}^{5}+1 \right) ^{-1}\,{\rm d}x+\int_ {-\infty }^{-1-a}\! \left( {x}^{5}+1 \right) ^{-1}\,{\rm d}x $$ where $a>0$ . The answer is half a page long with some arctangents and square-roots in it. The limit as $a \to 0^+$ is $$ {\frac {\pi\,\sqrt {10-2\,\sqrt {5}} \left( \sqrt {5}+3 \right) }{20}} \approx 1.93377 $$
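The same check can be done numerically (my addition, assuming SciPy): integrate over $(-\infty,-1-a)$ and $(-1+a,\infty)$ for a small $a>0$ and compare with the value above.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (x ** 5 + 1.0)
a = 1e-6
pieces = [quad(f, -np.inf, -2)[0],
          quad(f, -2, -1 - a, limit=200)[0],   # near-singular at the right endpoint
          quad(f, -1 + a, 0, limit=200)[0],    # near-singular at the left endpoint
          quad(f, 0, np.inf)[0]]
print(sum(pieces))                             # ~ 1.93377
print(np.pi * np.sqrt(10 - 2 * np.sqrt(5)) * (np.sqrt(5) + 3) / 20)
```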
|
|calculus|complex-analysis|complex-numbers|contour-integration|complex-integration|
| 1
|
Need Help with Particle Motion Analysis in $\mathbb{R}^2$
|
I'm currently working on a problem related to particle motion in the plane $\mathbf{R}^2$ and could use some guidance. The particle moves along a curve, and at any given time $t$ seconds after it starts, its position is described by the vector $\mathbf{r}(t) = (2 \cos \pi t, \sin \pi t)$ , with the units in meters. I have two specific questions: a) What type of curve does the particle move along? I'm trying to visualize the path based on the parametric equations given but could use some confirmation or guidance on how to accurately describe the curve. b) How do I determine the particle's velocity, speed, and acceleration at the specific time $t = 2$ seconds? I understand that velocity is the derivative of the position vector, and speed is the magnitude of the velocity, but I'm having trouble applying these concepts to find the acceleration as well and to calculate these quantities at $t = 2$ .
|
The curve parametrized by $$ \mathbf{r}(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} 2\cos(\pi t) \\ \sin(\pi t) \end{pmatrix} $$ describes an ellipse with half-axes $2$ and $1$ respectively, because $\frac{x^2}{4} + y^2 = 1$ . The velocity and the acceleration are computed straightforwardly by differentiating component-wise; one has thus : $$ \mathbf{v}(t) = \mathbf{\dot{r}}(t) = \frac{\mathrm{d}}{\mathrm{d}t}\mathbf{r}(t) = \begin{pmatrix} \dot{x}(t) \\ \dot{y}(t) \end{pmatrix} = \begin{pmatrix} -2\pi\sin(\pi t) \\ \pi\cos(\pi t) \end{pmatrix} $$ and $$ \mathbf{a}(t) = \mathbf{\dot{v}}(t) = \begin{pmatrix} -2\pi^2\cos(\pi t) \\ -\pi^2\sin(\pi t) \end{pmatrix} = -\pi^2\,\mathbf{r}(t), $$ which is actually the equation of motion of a harmonic oscillator of frequency $\pi$ .
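Evaluating these formulas at $t=2$ (a short sketch of my own, assuming NumPy) gives the requested quantities:

```python
import numpy as np

t = 2.0
r = np.array([2 * np.cos(np.pi * t), np.sin(np.pi * t)])
v = np.array([-2 * np.pi * np.sin(np.pi * t), np.pi * np.cos(np.pi * t)])
a = np.array([-2 * np.pi ** 2 * np.cos(np.pi * t), -np.pi ** 2 * np.sin(np.pi * t)])

print("position     :", r)                  # (2, 0) m
print("velocity     :", v)                  # (0, pi) m/s
print("speed        :", np.linalg.norm(v))  # pi m/s
print("acceleration :", a)                  # (-2*pi^2, 0) m/s^2
```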
|
|trigonometry|analytic-geometry|parametric|differential|kinematics|
| 0
|
Commutator of an abelian normal subgroup and the cyclic group generated by x
|
I'm working through the book "The Theory of Finite Groups" by Hans Kurzweil and Bernd Stellmacher. Section 1.6, Exercise 1 is: Let $A$ be an abelian normal subgroup of $G$ and $x \in G$ . Show $[A, \langle x \rangle]$ = $\{[a, x] \mid a \in A\}$ . Defns: Commutator: $ [x, y]=x^{-1} y^{-1} x y$ $[A, \langle x \rangle]= \langle \{[a, x^i] \mid a \in A, x^i \in \langle x \rangle\} \rangle$ As far as I've gotten: Obviously { $[a, x] \mid a \in A$ } is contained in $ [A, \langle x\rangle]. $ I need to show that $[a, x^i] = [a', x] $ for some $a' \in A.$ I've tried $a'= [a, x^{i-1}], $ and a number of other variations, but I can't find the trick that makes it work. Thank you for any help.
|
This can be shown by noting \begin{equation*} [a,x][b,x] = [a,x]^b[b,x]= [ab,x] \end{equation*} The middle equality follows because $[a,x]\in A$ , which is Abelian. This together with \begin{equation*} [a,x^{i+j}] = [a,x^j][a,x^i]^{x^j} = [a,x^j][a^{x^j},x^i] \end{equation*} is enough (do you see why?)
|
|abstract-algebra|group-theory|
| 0
|
Why is the Inner Product Induced by a Gaussian Matrix a Gaussian Process?
|
Let $A \in \mathbb{R}^{m \times n}$ be a Gaussian matrix such that $A_{ij} \sim N(0, 1)$ i.i.d. We define $$ X_{uv} := \langle Au, v \rangle. $$ Then it is claimed that $\{ X_{uv} \}_{(u,v) \in T}$ , where $T := S^{n - 1} \times S^{m - 1}$ is a Gaussian process. This should be easily observable, I am not sure I am seeing how. It is understandable that $X_{uv} \sim N(0, 1)$ through computation, but in order for $\{ X_{uv} \}_{(u,v) \in T}$ be a Gaussian process, we need to show $\sum_{(u, v) \in T_0} a_{uv}X_{uv}$ is a normal distribution for any $T_0 \subseteq T$ finite set. I do not see why this is true as $X_{uv}$ are not independent across different $u$ and $v$ .
|
With your notation $X_{uv}$ is a linear combination of $A_{ij}$ , namely $X_{uv}=\sum_{1\le i\le m,1\le j\le n}v_iu_jA_{ij}$ . So $$ \sum_{(u,v)\in T_0}a_{uv}X_{uv}=\sum_{(u,v)\in T_0}\sum_{1\le i\le m,1\le j\le n}a_{uv}v_iu_jA_{ij}=\sum_{1\le i\le m,1\le j\le n}\sum_{(u,v)\in T_0}a_{uv}v_iu_jA_{ij} $$ which is a sum of independent Gaussian variables. Hence it is Gaussian.
|
|real-analysis|probability-theory|normal-distribution|random-walk|random|
| 1
|
Can all $4a(1-b)$, $4b(1-c)$, $4c(1-d)$ and $4d(1-a)$ be greater than $1$?
|
Let $a$ , $b$ , $c$ and $d$ be arbitrary positive real numbers. Can all the products $4a(1-b)$ , $4b(1-c)$ , $4c(1-d)$ and $4d(1-a)$ be greater than $1$ ? Any idea how to solve this, because I don't know anything else except that $a,b,c,d>0$ and $4a(1-b)$ , $4b(1-c)$ , $4c(1-d)$ , $4d(1-a)>1$ , if and only if such numbers exist.
|
Let all these values be greater than $1$ : $$\begin{cases} 4a(1-b)>1\\4b(1-c)>1\\4c(1-d)>1\\4d(1-a)>1\end{cases}$$ Then their product is greater than $1$ : $$4a(1-b)\times 4b(1-c)\times 4c(1-d)\times 4d(1-a)>1.$$ The square root of it is also greater than $1$ : $$\sqrt{4a(1-b)\times 4b(1-c)\times 4c(1-d)\times 4d(1-a)}>1.$$ Reshuffle the factors a bit: $$\sqrt{4a(1-a)}\times \sqrt{4b(1-b)}\times \sqrt{4c(1-c)}\times \sqrt{4d(1-d)}>1$$ $$2\sqrt{a(1-a)}\times 2\sqrt{b(1-b)}\times 2\sqrt{c(1-c)}\times 2\sqrt{d(1-d)}>1.\tag1$$ Note that $\sqrt{a(1-a)}$ exists, since $a(1-a)$ is always positive. Indeed, $a>0$ and $d(1-a)>0$ , so $1-a>0$ . The same with the other square roots. Now, due to AM-GM we have: $$\begin{cases} a+(1-a)\ge2\sqrt{a(1-a)}\\ b+(1-b)\ge2\sqrt{b(1-b)}\\ c+(1-c)\ge2\sqrt{c(1-c)}\\ d+(1-d)\ge2\sqrt{d(1-d)} \end{cases}$$ Or $$\begin{cases} 1\ge2\sqrt{a(1-a)}\\ 1\ge2\sqrt{b(1-b)}\\ 1\ge2\sqrt{c(1-c)}\\ 1\ge2\sqrt{d(1-d)} \end{cases}$$ Multiplying those we get: $$1\ge 2\sqrt{a(
|
|real-analysis|inequality|
| 1
|
Asymptotics of expression involving floor functions
|
Consider two functions $f_1(n)$ and $f_2(n)$ that grow to infinity at the same speed, say $$\lim_{n\to \infty} \frac{f_1(n)}{f_2(n)}=c $$ for some $c>0$ . I am studying the squared difference of $f_1$ and $f_2$ . Can the squared difference of the floor functions of $f_1$ and $f_2$ be estimated by that of the original functions? In other words, my question is: does there exist some constant $C>0$ such that $$ ( \lfloor f_1(n) \rfloor - \lfloor f_2(n) \rfloor )^2 \leq C (f_1(n) - f_2(n) )^2 $$ for n large enough? It is obvious that $\lfloor f(n) \rfloor$ grows like $f(n)$ , but the difference term makes things more complicated.
|
This holds if $c\ne1$ . If $c=1$ , then you need to ensure that the difference goes to infinity. Otherwise, you could have a situation like $f_1(n)=n$ , $f_2(n)=n-1/n$ .
|
|inequality|asymptotics|ceiling-and-floor-functions|
| 0
|
Evaluate the series which looks like a telescopic series but isn't one?
|
Evaluate $$\sum_{r=0}^\infty\frac{1}{(3r+1)(3r+2)}$$ Wolfram alpha gives the answer $\pi/(3\sqrt{3})$ so I know for sure that this requires multiple mathematical concepts like the taylor series which isn't the telescopic series. I think complex numbers and the cube root of unity may have something to do with it due to the tricyclic nature of the series but I can't lay my finger on the approach.
|
You are correct that the series does not telescope. That said, a partial fraction decomposition can be used to evaluate the series: $$\frac{1}{(3n+1)(3n+2)} = \frac{1}{3n+1} - \frac{1}{3n+2}$$ hence consider the function $$f(z) = \sum_{n=0}^\infty z^{3n} = \frac{1}{1-z^3}, \quad |z| < 1,$$ whose antiderivative is $$F(z) = \sum_{n=0}^\infty \frac{z^{3n+1}}{3n+1} = \int_{t=0}^z \frac{1}{1-t^3} \, dt, \quad |z| < 1.$$ Similarly, $$\sum_{n=0}^\infty \frac{z^{3n+2}}{3n+2} = \int_{t=0}^z \frac{t}{1-t^3} \, dt, \quad |z| < 1.$$ Put together, $$S(z) = \sum_{n=0}^\infty \frac{z^{3n+1}}{3n+1} - \frac{z^{3n+2}}{3n+2} = \int_{t=0}^z \frac{1-t}{1-t^3} \, dt, \quad |z| < 1.$$ Taking the limit as $z \to 1^-$ we obtain $$\begin{align} S &= \sum_{n=0}^\infty \frac{1}{(3n+1)(3n+2)} \\ &= \int_{t=0}^1 \frac{1}{1+t+t^2} \, dt \\ &= \left[\frac{2}{\sqrt{3}} \tan^{-1} \frac{1+2t}{\sqrt{3}} \right]_{t=0}^1 \\ &= \frac{\pi}{3\sqrt{3}}. \end{align}$$
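A quick numerical check of the value (my addition): the partial sums approach $\pi/(3\sqrt3)\approx 0.60460$.

```python
import math

s = sum(1.0 / ((3 * r + 1) * (3 * r + 2)) for r in range(10 ** 6))
print(s, math.pi / (3 * math.sqrt(3)))
```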
|
|sequences-and-series|complex-numbers|taylor-expansion|
| 1
|
Evaluate the series which looks like a telescopic series but isn't one?
|
Evaluate $$\sum_{r=0}^\infty\frac{1}{(3r+1)(3r+2)}$$ Wolfram alpha gives the answer $\pi/(3\sqrt{3})$ so I know for sure that this requires multiple mathematical concepts like the taylor series which isn't the telescopic series. I think complex numbers and the cube root of unity may have something to do with it due to the tricyclic nature of the series but I can't lay my finger on the approach.
|
Here is one way. Let $$ f(z) = \sum_{r=0}^\infty \frac{z^{3r+1}}{3r+1},\qquad g(z) = \sum_{r=0}^\infty \frac{z^{3r+2}}{3r+2}, $$ for $|z| < 1$ , so that our answer is $L = \lim_{z\to 1^-}\big(f(z)-g(z)\big)$ . Now compute $$ f'(z) = \sum_{r=0}^\infty z^{3r} = \frac{1}{1-z^3},\qquad g'(z) = \sum_{r=0}^\infty z^{3r+1} = \frac{z}{1-z^3} . $$ Integrate these to recover $f$ and $g$ : $$ f(z) = -{\frac {\pi\,\sqrt {3}}{18}}-{\frac {\ln \left( 1-z \right) }{3}}+{ \frac {\ln \left( {z}^{2}+z+1 \right) }{6}}+{\frac {\sqrt {3}}{3} \arctan \left( {\frac { \left( 2\,z+1 \right) \sqrt {3}}{3}} \right) } \\ g(z) = {\frac {\pi\,\sqrt {3}}{18}}-{\frac {\ln \left( 1-z \right) }{3}}+{ \frac {\ln \left( {z}^{2}+z+1 \right) }{6}}-{\frac {\sqrt {3}}{3} \arctan \left( {\frac { \left( 2\,z+1 \right) \sqrt {3}}{3}} \right) } \\ f(z)-g(z) = -{\frac {\pi\,\sqrt {3}}{9}}+{\frac {2\,\sqrt {3}}{3}\arctan \left( { \frac {\sqrt {3} \left( 2\,z+1 \right) }{3}} \right) } \\ L = \lim_{z\to 1^-}\big(f(z)-g(z)\big) = -\frac
|
|sequences-and-series|complex-numbers|taylor-expansion|
| 0
|
Algebra of pseudo-differential operators
|
The class of pseudo-differential operators forms an associative algebra of Fourier integral operators. Moreover, given symbols $a,b,c\in C^\infty$ (each associated to some pseudo-differential operator), for the composition of symbols (#) there holds: $$\text{op}(a)\circ \text{op}(b) = \text{op}(a\# b),$$ so, obviously, $\#$ should be an associative operation as well. The latter is equivalent to proving that for $$(a \# b)(x,y)=\sum_{|\alpha|\geq 0} \frac 1 {\alpha!} \partial_y^\alpha a(x,y) D_x^\alpha b(x,y)\quad (1)$$ there holds $(a\#b)\# c = a\#(b\# c)$, where $D=-i\partial$. To check this, I first defined $\circ_N$ which considers in $(1)$ only the multi-indexes up to length $N$, i.e. $$(a \circ_N b)(x,y)=\sum_{|\alpha|\leq N} \frac 1 {\alpha!} \partial_y^\alpha a(x,y) D_x^\alpha b(x,y).$$ Now, using the general Leibniz product rule I get $$LS := (a\circ_N b) \circ_N c = \sum_{|\alpha|\leq N} \frac 1 {\alpha!} \partial_\xi^\alpha \Big(\sum_{|\beta|\leq N} \partial_\xi^\beta a D_x^\beta b\Big) D_x
|
Certainly @WillieWong's answer directly responds to the question. It may be worth adding that, yes, indeed, there are complications in proving associativity of composition of pseudo-differential operators in the Kohn-Nirenberg model of them, as being on $\mathbb R^n$ . Keyword for the composition in the K-N model of pseudo-differential operators: "Moyal product" (well-known to physicists, I gather). That is, I have to say that it was a great relief to me when I learned (from some marvelous writing of G. Folland) that Weyl's presentation of pseudo-differential operators, as related to Heisenberg groups (as opposed to just $\mathbb R^n$ ) made the composition simply be the convolution of operators attached to functions on the group. So, from general principles, we have associativity.
|
|analysis|associativity|pseudo-differential-operators|
| 0
|
compute the following integral in closed form : $\int_0^{\frac{π}{2}}\frac{x}{(1+\sqrt 2)\sin^{2}(x)+8\cos^{2} x}dx$
|
Evaluate $I=\int_0^{\frac{π}{2}}\frac{x}{(1+\sqrt 2)\sin^{2} (x)+8\cos^{2} x}dx$. How can I start on this hard integral? At first I tried $y=\frac{π}{2}$ but got no result, so I then used $y=\tan \frac{x}{2}$, so that $dx=\frac{2}{1+y^2}dy$, $x=2\arctan y$, $\cos x=\frac{1-y^2}{1+y^2}$ and $\sin x=\frac{2y}{1+y^2}$. So: $8\cos^{2} x+(1+\sqrt 2)\sin^{2} x=\frac{8(1-y^2)^2+4(1+\sqrt 2)y^2}{(1+y^2)^2}$. Now I get an $\arctan$ integral $I=2\int_0^{\infty}\frac{(1+y^2)\arctan y}{8(1-y^2)^2+4(1+\sqrt 2)y^2}dy$, but I don't know how to complete this work!
|
Consider $$I := \int_0^{\frac\pi2} \frac{x \,dx}{\alpha^2 \sin^2 x + \beta^2 \cos^2 x}.$$ Changing variables to $$u = \tan x$$ transforms the integral to $$\int_0^\infty \frac{\arctan u \,du}{\alpha^2 u^2 + \beta^2}.$$ Now, consider the family $$I(\lambda) := \int_0^\infty \frac{\arctan (\lambda u) \,du}{\alpha^2 u^2 + \beta^2}$$ of integrals; in particular, $I(1) = I$ . Differentiating under the integral sign gives $$I'(\lambda) = \int_0^\infty \frac{u \, du}{(\lambda^2 u^2 + 1) (\alpha^2 u^2 + \beta^2)} = \frac{\log \frac\beta\alpha + \log \lambda}{\beta^2 \lambda^2 - \alpha^2} .$$ Now, $I(\infty) := \lim_{\lambda \to \infty} I(\lambda) = \frac\pi2 \int_0^\infty \frac{du}{\alpha^2 u^2 + \beta^2} = \frac{\pi^2}{4 \alpha \beta}$ , so $$ I(\infty) - I = \frac{\pi^2}{4 \alpha \beta} - I .$$ On the other hand, the F.T.C. gives $$I(\infty) - I = \int_1^\infty \frac{\log \frac\beta\alpha + \log \lambda}{\beta^2 \lambda^2 - \alpha^2} \,d\lambda$$ and evaluating this integral, e.g., using the identity
|
|integration|definite-integrals|closed-form|
| 0
|
Questions about $\mathbb Z/30\mathbb Z$
|
I'm interested in the ring $(\mathbb Z/30\mathbb Z,+,\times)$ , the elements of which I'll write in bold. For example $$\textbf{7}=7+30\mathbb Z=\{...-83,-53,-23,7,37,67...\}.$$ I'm interested in particular in $$U:=(\mathbb Z/30\mathbb Z)^\times=\{\textbf{1},\textbf{7},\textbf{11},\textbf{13},\textbf{17},\textbf{19},\textbf{23},\textbf{-1}\}$$ because every prime except $2,3$ and $5$ belongs to one of these classes of numbers. There are a lot of things to say about $\mathbb Z/30\mathbb Z$ , but these are the types of objects that interest me: $$\boxed{J\overset{def1}=\{p\in (\mathbb Z/30\mathbb Z)^\times|p+\textbf{6}\in (\mathbb Z/30\mathbb Z)^\times \}}$$ For example, $\textbf{19}\notin J$ $$\boxed{G\overset{def2}=\{p\in (\mathbb Z/30\mathbb Z)^\times|6p+\textbf{1}\in (\mathbb Z/30\mathbb Z)^\times\}}$$ About $J$ and $G$ , there's a result that I find interesting about the involution defined below $$\boxed{\varphi:U\to U, u\mapsto u^{-1}\text{ induces bijection }\varphi_{/J}:J
|
Since I have not received a reply, I do not think it is inappropriate to propose names for $J$ and $G$ (in my answer, I will not use bold notation but the usual abuse of notation). To make the names I give them understandable, it is necessary to generalize. Let $n,m\in \mathbb N^*$ . $$U_n:=(\mathbb Z/p_n\#\mathbb Z)^\times$$ $$J_{n,m}:=\{p\in U_n|p+p_m\#\in U_n\}$$ $$G_{n,m}:=\{p\in U_n|p_m\#\cdot p+1\in U_n\}$$ Examples: $$p_1\#=2, p_2\#=p_1\cdot p_2=2\times 3=6$$ $$J_{n,1}:=\{\color{red}p\in U_n|\color{red}{p+2}\in U_n\}, G_{n,1}:=\{\color{red}p\in U_n|\color{red}{2p+1}\in U_n\}$$ $$J=J_{3,2}, G=G_{3,2}$$ Defining these objects and studying them is obviously inspired by Sophie **G**ermain's numbers and by twins (**J**umeaux in French). So I call them "m-order twins in $\mathbb Z/p_n\#\mathbb Z$" and "m-order Germains". Hence my $J$ and $G$ notations in the original post. I hope that this will be a little clearer, as my questions are still open. Coefficients $$\alpha_m=\frac{|J_{n,m}|}{|J_{n,1}|}$$ do no
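For concreteness, the sets $U$, $J$ and $G$ from the original post can be computed directly (a small sketch of my own using the standard library):

```python
from math import gcd

n = 30
U = [u for u in range(n) if gcd(u, n) == 1]
J = [u for u in U if gcd((u + 6) % n, n) == 1]
G = [u for u in U if gcd((6 * u + 1) % n, n) == 1]
print("U =", U)   # [1, 7, 11, 13, 17, 19, 23, 29]
print("J =", J)   # 19 is not in J, as noted in the question
print("G =", G)
```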
|
|modular-arithmetic|terminology|primorial|
| 0
|
Concept issue regarding the locus defined by $\arg\left(\frac{z-1}{z}\right) = \frac{\pi}{2}$
|
I had this simple statement: $$\arg\left(\frac{z-1}{z}\right) = \frac{\pi}{2}$$ I was interested in the locus of $z$ . My understanding is that the angle between the line segments $\overline{OA}$ and $\overline{OB}$ is $\frac{\pi}{2}$ , where $O$ is the origin, $A$ is $(x-1, y)$ , and $B$ is $(x,y)$ . Using " $\text{product of slopes} = -1$ ", I get the locus of $z$ as $x^2+y^2-x=0$ , which is a circle centered at $(\frac{1}{2},0)$ and passing through the origin. But intuitively speaking, only $(\frac{1}{2}, -\frac{1}{2})$ should be a solution, and no other coordinate should satisfy it. What am I not considering here?
|
Let's first solve it directly, and then discuss the point you are missing. Since $\arg\left(\dfrac{z-1}{z}\right)=\dfrac{\pi}{2}$ , we have $\dfrac{z-1}{z}\in\mathbb{R}^+i$ , so there is $k\in\mathbb{R}^+$ such that $$\dfrac{z-1}{z}=ki\iff z=\dfrac{1}{1-ki}=\dfrac{1+ki}{1+k^2}$$ So what is the locus of this set? You can see $(x,y)=\left(\dfrac{1}{1+k^2},\dfrac{k}{1+k^2}\right)$ , so $k=\dfrac{y}{x}$ and $$x=\dfrac{1}{1+\frac{y^2}{x^2}}\implies x^2+y^2=x;\quad x,y>0$$ i.e. the upper semicircle excluding the origin. The point you are missing is that $z$ lies on that semicircle, but $z-1$ does not. As you might guess, $z-1$ lies on the upper semicircle $x^2+y^2=-x$ , and the two points have the same $y$ -coordinate. So all points on the upper semicircle (excluding the $x$ -axis) are possible.
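A quick numerical spot check of this description (assuming standard Python; not part of the original answer): for several $k>0$, the point $z = 1/(1-ki)$ satisfies $\arg\big(\tfrac{z-1}{z}\big)=\tfrac{\pi}{2}$ and lies on $x^2+y^2=x$:

```python
# Check arg((z-1)/z) = pi/2 and x^2 + y^2 = x for z = 1/(1 - k i), k > 0.
import cmath

for k in (0.3, 1.0, 4.0):
    z = 1 / (1 - 1j * k)
    x, y = z.real, z.imag
    print(cmath.phase((z - 1) / z), x**2 + y**2 - x)  # ~pi/2 and ~0
```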
|
|complex-numbers|locus|
| 0
|
What is the empty tensor product of vector spaces?
|
The tensor product of a space with itself once is $V^{\otimes1}$ , but what is $V^{\otimes0}$ ? Since it is an empty tensor product, it is - a fortiori - an empty product. So I'm looking for a " $1$ " of some sort, just not sure what that would mean in this context. "If I take the tensor product of a vector space with itself zero times, I would get ...", and I am guessing here, but is it the underlying field, $\mathbb{F}$ ?
|
$ \newcommand\F{\mathbb F} \newcommand\Tensor{\mathop{\textstyle\bigotimes}} $ Tensor powers are characterized as the "universal source" of multilinear maps. If $f : V^k \to W$ is multilinear then there is a unique linear extension $f'$ of $f$ to $V^{\otimes k} \to W$ , with "extension" meaning $$f'(v_1\otimes\dotsb\otimes v_k) = f(v_1, \dotsc, v_k).$$ It will be important to be more precise. "The $k^{\text{th}}$ -tensor power" is more properly a function $\iota$ with domain $V^k$ , codomain a vector space $V^{\otimes k}$ , and which satisfies the above universal property. In the notation above, $$ \iota(v_1,\dotsc,v_k) = v_1\otimes\dotsb\otimes v_k. $$ We need to give meaning to a "multilinear map $V^0 \to W$ ". We could say $V^0 = \varnothing$ ; a multilinear map is a map that is linear in each of its arguments, so if there are no arguments then it is automatically multilinear. Then $f$ is the unique map $\varnothing \to W$ and also $\iota : \varnothing \to V^{\otimes0}$ . This puts no
|
|tensor-products|tensors|multilinear-algebra|
| 0
|
Restatement of complementary slackness
|
Complementary slackness states the following: $x$ and $y$ are optimal solutions to the primal and dual respectively if and only if they are feasible for the primal and dual respectively and they satisfy the complementary slackness condition. My professor claimed that it follows that: for any feasible solution $x$ to the primal, $x$ is an optimal solution to the primal if and only if there exists $y$ that is feasible for the dual and satisfies the complementary slackness condition. I don't understand why this is true. One direction, " $x$ is an optimal solution to the primal only if there exists $y$ that is feasible for the dual and satisfies the complementary slackness condition", is true because of strong duality. But I don't understand the other direction. If there exists $y$ that is feasible for the dual and satisfies the complementary slackness condition, how do we justify that $x$ is optimal?
|
When solving either the primal or dual models for an optimal solution, we are effectively trying to close the duality gap between the models (see the Duality Theorems section). Recall the formal definitions of primal and dual linear programs: \begin{matrix} & \text{Primal} & \text{Dual} \newline & \max z = c^Tx & \min w=b^Ty \newline \text{Subject to:} & Ax\le b & A^Ty \ge c \newline & x\ge0 & y \ge 0 & \end{matrix} The duality theorem suggests that the constraints in one problem are tied to the variables of another problem via this primal-dual relationship ( see Page $7$ ). If the primal variable, $x>0$ , the constraint in the corresponding dual problem is binding; otherwise, if $x=0$ , it is not binding. Likewise, if $y>0$ , the corresponding constraint in the primal is binding; otherwise, if $y=0$ , it is not binding. The duality gap states that if an optimal solution for either problem is found, then $c^Tx = b^Ty$ . Therefore, complementary slackness is a direct result of this beha
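For reference, here is the standard short argument for the direction asked about (a worked step, not quoted from the original answer), stated for the max/min pair written above: if $x$ is primal feasible, $y$ is dual feasible, and complementary slackness holds, then $$c^Tx \;\le\; (A^Ty)^Tx \;=\; y^TAx \;\le\; y^Tb,$$ and complementary slackness forces both inequalities to be equalities, so $c^Tx=b^Ty$. By weak duality no feasible primal solution can exceed $b^Ty$, hence $x$ is optimal.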
|
|linear-programming|duality-theorems|
| 0
|
Torus and homeomorphism
|
Let $0<r<a$ be fixed. For $(\alpha,\beta)\in[0,2\pi]^2$ we consider $$ g(e^{i\alpha},e^{i\beta}) = ((a+r\cos\alpha)\cos\beta,(a+r\cos\alpha)\sin\beta,r\sin\alpha) $$ I would like to prove it is a homeomorphism from $S^1 \times S^1$ to $g(S^1 \times S^1)$ . The fact it is onto is clear. The fact the map is continuous is clear since each coordinate function is continuous. The conclusion should follow from the fact that a bijective and continuous map from a compact metric space to another is a homeomorphism. However I have some difficulty proving it is injective. I tried the usual machinery by considering $(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$ such that $g(e^{i\alpha_1},e^{i\beta_1})=g(e^{i\alpha_2},e^{i\beta_2})$ but I get nothing interesting using the trigonometric formulas I know, and I wonder if this is a good way to prove that it is injective. I would like to know if there is another, more « economical », way to prove that the map is injective, please. Thank you
|
Say we have some $(\alpha_1,\beta_1)$ and $(\alpha_2,\beta_2)$ such that $g(e^{i\alpha_1},e^{i\beta_1})=g(e^{i\alpha_2},e^{i\beta_2})$ , and let $g_1,g_2,g_3$ denote the component functions of $g$ . By $$g_3(e^{i\alpha_1},e^{i\beta_1})=g_3(e^{i\alpha_2},e^{i\beta_2})$$ we get $\sin(\alpha_1)=\sin(\alpha_2)$ , and by $$[g_1(e^{i\alpha_1},e^{i\beta_1})]^2+[g_2(e^{i\alpha_1},e^{i\beta_1})]^2=[g_1(e^{i\alpha_2},e^{i\beta_2})]^2+[g_2(e^{i\alpha_2},e^{i\beta_2})]^2$$ we obtain $(a+r\cos(\alpha_1))^2=(a+r\cos(\alpha_2))^2$ . As $0<r<a$ we have $a+r\cos(\alpha_1)>0$ and $a+r\cos(\alpha_2)>0$ , and so $$a+r\cos(\alpha_1)=a+r\cos(\alpha_2)\implies \cos(\alpha_1)=\cos(\alpha_2)$$ As $\alpha_1,\alpha_2\in [0,2\pi]$ and $\sin(\alpha_1)=\sin(\alpha_2), \space\cos(\alpha_1)=\cos(\alpha_2)$ we must have either $\alpha_1=\alpha_2$ or one of the angles being $0$ and the other $2\pi$ . In any case, $e^{i\alpha_1}=e^{i\alpha_2}$ . Dividing $g_1$ and $g_2$ by $a+r\cos(\alpha_1)=a+r\cos(\alpha_2)$ yields $$\cos(\be
|
|general-topology|analysis|trigonometry|exponential-function|
| 1
|
Calculation about $dg \wedge \star df$.
|
I'm trying to compute $dg \wedge \star df$ . By the definition of the Hodge star, I write $$\star df= \left(\partial_\mu f\right) g^{\mu \nu} \sqrt{|g|} \epsilon_{\nu \rho_1 \ldots \rho_{n-1}} d x^{\rho_1} \wedge \ldots \wedge d x^{\rho_{n-1}} $$ and $$dg=\partial_i g dx^i.$$ But I have trouble computing $$(\partial_i g dx^i) \wedge \left(\partial_\mu f\right) g^{\mu \nu} \sqrt{|g|} \epsilon_{\nu \rho_1 \ldots \rho_{n-1}} d x^{\rho_1} \wedge \ldots \wedge d x^{\rho_{n-1}},$$ is there a more direct way to compute it?
|
By definition of the Hodge- $*$ , the identity that $\alpha\wedge *\beta$ is just $g(\alpha,\beta)$ times the volume form holds for all pairs of $k$ -forms, where $g$ denotes the induced metric on $k$ -forms. In the case of one-forms this is just the inverse metric, so in your situation you simply get $$dg\wedge *df=\left(g^{\mu\nu}(\partial_\mu g)(\partial_\nu f)\right) \sqrt{|g|}dx^1\wedge\dots\wedge dx^n.$$ This follows quickly from your computation using that $dx^i\wedge dx^{\rho_1}\wedge\dots\wedge dx^{\rho_{n-1}}=\epsilon_{i\rho_1\dots\rho_{n-1}}dx^1\wedge\dots\wedge dx^n$ .
|
|differential-geometry|
| 1
|
Prove that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field
|
I am working on: Prove that the Lie bracket of two symplectic vector fields is a Hamiltonian vector field. More precisely, show that for all $X, Y ∈ \scr X(M, \omega)$ we have $i_{[X,Y ]}\omega = −dH$ where $H = \omega(X, Y )$ My attempt: My definition of Hamiltonian and symplectic vector fields are: A vector field $X \in \scr {X}(M)$ , is Hamiltonian if $i_X\omega$ is exact, that is, there exists a smooth map $H : M \to \Bbb R$ such that $i_X\omega = −dH$ A vector field $X \in \scr X(M)$ , is symplectic if $i_X\omega$ is closed as a one-form, i.e $d i_X\omega=0$ To prove the statement I will use the following formula $i_{[X,Y ]}\omega = d(i_Xi_Y \omega) + i_Xdi_Y (\omega) − i_Y di_X(\omega) − i_X(i_Y (d\omega))$ so: since $\omega$ is symplectic, $d\omega$ is closed, $d\omega=0$ and by definition of symplectic vector field $di_Y (\omega)=0$ = $di_X (\omega)$ $i_{[X,Y ]}\omega = d(i_Xi_Y \omega)+0+0+0= d(i_Xi_Y \omega)= d i_X\omega(Y,\cdot)=d \omega (X,Y)$ So my problem is that I am mis
|
You did the interior multiplication in the wrong order. Since $\omega$ is a 2-form, we have that $i_Y\omega$ is a 1-form characterized by: $$ (i_Y\omega)(V)=\omega(Y,V). $$ Similarly, since $i_Y\omega$ is a 1-form, $i_X(i_Y\omega)$ is a 0-form (i.e., a function) characterized by: $$ i_X(i_Y\omega)=(i_Y\omega)(X). $$ But now, putting these together, we see $$ i_X(i_Y\omega)=(i_Y\omega)(X)=\omega(Y,X)=-\omega(X,Y). $$ In general: if $\theta$ is an $n$ -form, then $i_{V_k}\cdots i_{V_1}\theta$ is an $(n-k)$ -form characterized by $$ (i_{V_k}\cdots i_{V_1}\theta)(V_{k+1},\ldots,V_n)=\theta(V_1,\ldots,V_n). $$
|
|differential-geometry|lie-algebras|smooth-manifolds|differential-forms|symplectic-geometry|
| 1
|
Justification for the definition of A/R using the power set axiom
|
I'm currently doing a question from The Foundations of Mathematics by Kenneth Kunen for self-teaching purposes, and I'm stuck on a question that asks me to justify the definition of A/R using the power set axiom and not the replacement axiom. The steps I have done so far (and I'm not sure if I've done them correctly, to be honest) are as follows: We are given $ A/R \subseteq P(A) $ ; $ [x] $ is defined as $ [x] = \{ y : y \in A \wedge y R x\}$ and the power set is defined as $ P(A) = \{x: x \subseteq A \} $ . $\forall x \in A/R \rightarrow x \in P(A) $ , as $A/R \subseteq P(A)$ ; $x$ is then a subset for which $\forall a \in x \rightarrow a \in A$ , as $x \in P(A)$ ; $[x] \in A/R \rightarrow [x] \in P(A) $ by definition, which leads to $\forall y \in [x] \rightarrow y \in A $ , as $[x] \in P(A)$ . From here I'm lost as to how to proceed. I think I can use restricted comprehension to prove that the set containing [x] exists since I have one condition fulfilled as $[x] \in P(A)$ , but I don'
|
I don't have Kunen's book to hand. I take it that $A/R$ is intended to represent the set of equivalence classes of the (equivalence) relation $R$ on a set $A$ . Then you have: $$ A/R = \{ C \in P(A) : \exists x \in A . \forall y . (y \in C \Leftrightarrow x \mathrel{R} y)\} $$ I.e., $A/R$ comprises all sets $C$ that, for some $x$ , comprise precisely the elements $y$ that are equivalent under $R$ to $x$ . The power set axiom and the axiom of comprehension (which Kunen may be calling restricted comprehension) imply the existence of the set on the right-hand side of the above equation - the replacement axiom is not required.
|
|first-order-logic|axioms|
| 1
|
Proving $\left\lfloor \frac{x}{ab} \right\rfloor = \left\lfloor \frac{\left\lfloor \frac{x}{a} \right\rfloor}{b} \right\rfloor$ for $a,b>1$
|
I'm trying to prove rigorously the following: $$\left\lfloor \frac{x}{ab} \right\rfloor = \left\lfloor \frac{\left\lfloor \frac{x}{a} \right\rfloor}{b} \right\rfloor$$ for integers $a,b \geq 1$ and real $x$ . So far I haven't gotten far. It's enough to prove this instead: $$\left\lfloor \frac{z}{c} \right\rfloor = \left\lfloor \frac{\lfloor z \rfloor}{c} \right\rfloor$$ for integers $c \geq 1$ and real $z$ since we can just put $z=\lfloor x/a \rfloor$ and $c=b$ .
|
I think the following approach is simple and straightforward. For a real number $x \ge 0$ and integers $a,b > 0$ , let $n$ be the integer such that $n \le x/(ab) < n+1$ . This implies $nb \le x/a < (n+1)b$ (after multiplying every term by $b$ ). Now, LHS $= \lfloor x/(ab) \rfloor = n$ . Since $\lfloor x/a \rfloor$ is an integer and $nb \le \lfloor x/a \rfloor \le x/a < (n+1)b$ , RHS $= \big\lfloor \lfloor x/a \rfloor / b \big\rfloor = n$ . LHS $=$ RHS. Hence, proved.
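A brute-force spot check of the identity (assuming standard Python; not part of the original answer):

```python
# Sample random a, b >= 1 and x >= 0 and confirm floor(x/(a*b)) == floor(floor(x/a)/b).
import math
import random

mismatches = 0
for _ in range(100000):
    a, b = random.randint(1, 20), random.randint(1, 20)
    x = random.uniform(0, 1000)
    if math.floor(x / (a * b)) != math.floor(math.floor(x / a) / b):
        mismatches += 1
print("mismatches:", mismatches)  # expected: 0
```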
|
|algebra-precalculus|number-theory|ceiling-and-floor-functions|
| 0
|
Formal proof of equality of ordered pairs
|
I am trying to prove with natural deduction the following with the Kuratowski definition of ordered pair: $$\forall x, y, z, w(\langle x, y\rangle=\langle z, w\rangle\leftrightarrow(x=z\land y=w))$$ I can prove one direction, but I'm having trouble with the other one. So far, this is what I have:
|
Seems like you have the $\gets$ direction OK. I'm not going to be much help with a formal proof, since I haven't done that since HS. But here's the idea: If $\left\langle x,y \right\rangle = \left\langle z,w \right\rangle$ , then $\left\{ \left\{ x \right\},\left\{ x,y \right\} \right\}=\left\{ \left\{ z \right\},\left\{ z,w \right\} \right\}$ . So, the two sets are equal. Sets are equal only if they contain the same elements. That means, e.g., that $\{x\}\in \{\{z\},\{z,w\}\}$ . How can that be? Which of $\{z\}$ or $\{z,w\}$ can $\{x\}$ be equal to? Why?
|
|elementary-set-theory|logic|set-theory|formal-proofs|
| 0
|
Understanding $\frac{\text{d}u}{\text{d}t}$ vs. $\frac{\text{D}u}{\text{D}t}$
|
I'm working on an unassessed course problem. The setup might be unnecessary for my question, but I give it in case it helps. An incompressible fluid of constant density $\rho$ has velocity $\boldsymbol{u}(x,t)$ . For fluid occupying a closed volume $V$ bounded by a surface $S$ , with outward normal unit vector $\boldsymbol{n}$ , deduce that $$\frac{\text{d}}{\text{d}t}\int_V\rho u_i\text{ d}V=\int_S\sigma_{ij}n_j\text{ d}S+\int_V\rho F_i\text{ d}V,$$ where $\sigma_{ij}$ is the stress tensor, and $\boldsymbol{F}$ is the body force per unit mass. Use Gauss' divergence theorem with $f_j=\sigma_{ij}a_i$ , where $\boldsymbol{a}$ is an arbitrary constant vector, to deduce that $$\rho\frac{\text{D}u_i}{\text{D}t}=\frac{\partial\sigma_{ij}}{\partial x_j}+\rho F_i,$$ throughout the fluid. The solution booklet contains the line, $\rho\text{ d}V$ is the mass of a fluid element, and does not change as the element moves around, so $$\frac{\text{d}}{\text{d}t}\int_V\rho u_i\text{ d}V=\int_V\left(\fr
|
Pedantic, I know, but fluids are my area of expertise so I really can't let this go. $\frac{\mathrm d}{\mathrm dt}$ and $\frac{\mathrm D}{\mathrm Dt}$ are not equivalent operators. The operator $\frac{\mathrm d}{\mathrm dt}$ is a single-variable operator defined for functions $\mathbf F:t\mapsto \mathbf F(t)$ only, whereas the material derivative $\frac{\mathrm D}{\mathrm Dt}$ acts on functions of a time and a space variable, $\mathbf F:(t,\boldsymbol x) \mapsto \mathbf F(t,\boldsymbol x)$ . The material derivative is defined as $$\frac{\mathrm D}{\mathrm Dt}=\partial_t+\boldsymbol u\cdot \nabla$$ The two are only "equivalent" in the case that we consider the streamlines of the vector field $\boldsymbol u$ , that is, if we consider the trajectory $\boldsymbol x(t)$ of a particle in the velocity field $\boldsymbol u$ , i.e, $\boldsymbol x$ satisfies $$\dot{\boldsymbol x}(t)=\frac{\mathrm d}{\mathrm dt}\boldsymbol x(t)=\boldsymbol u(t,\boldsymbol x(t))\tag{1}$$ Then it is true that $$\le
|
|multivariable-calculus|physics|fluid-dynamics|
| 1
|
Derivative notation in section 1.2 of Stein-Shakarchi's Complex Analysis
|
I'm reading about holomorphic functions in section 1.2 of Complex Analysis by Stein and Shakarchi, and I am pretty confused about the derivative notation that the authors employ. In this section the authors derive the Cauchy-Riemann equations for a complex-valued function $f: \Omega \to \mathbb{C}$ , where $\Omega$ is an open subset of $\mathbb{C}$ . The following passage from page 11 has me confused: ...consider the limit in (1) [ $\lim_{h \to 0} \frac{f(z_0+h) - f(z_0)}{h}$ ] when $h$ is first real, say $h = h_1 + ih_2$ with $h_2 = 0$ . Then, if we write $z = x + iy, \, z_0 = x_0 + iy_0$ , and $f(z) = f(x,y)$ , we find that \begin{align*} f'(z_0) &= \lim_{h_1 \to 0} \frac{f(x_0 + h_1, y_0) - f(x_0,y_0)}{h_1} \\[5pt] &= \frac{\partial f}{\partial x}(z_0), \end{align*} where $\partial/\partial x$ denotes the usual partial derivative in the $x$ variable. (We fix $y_0$ and think of $f$ as a complex-valued function of the single real variable $x$ .) Now taking $h$ purely imaginary, say $h
|
On page 11 of my copy, the authors of speak of "associating each complex-valued function $f=u+iv$ with the mapping $F(x,y)=(u(x,y),v(x,y))$ from $\mathbb R^2$ to $\mathbb R^2$ ". This suggests that when they use notation like $\frac{\partial f}{\partial x}(z_0)$ for functions $\mathbb C\to\mathbb C$ , a string of natural identifications are taking place in the background. If we write $z_0=x_0+iy_0$ with $x_0,y_0$ real, then the corresponding vector in $\mathbb R^2$ is $(x_0,y_0)$ . The notation $\frac{\partial f}{\partial x}(z_0)$ thus actually means "the complex number identified with $\frac{\partial F}{\partial x}(x_0,y_0)$ ". Explicitly, $\frac{\partial f}{\partial x}(z_0)$ is the complex number identified with the following vector in $\mathbb R^2$ : $$ \frac{\partial F}{\partial x}(x_0,y_0)=\lim_{h\to 0}\frac{F(x_0+h,y_0)-F(x_0,y_0)}{h} \, . $$ If you want a more direct definition of $\frac{\partial f}{\partial x}(z_0)$ , it is $$ \lim_{h\to 0}\frac{f(z_0+h)-f(z_0)}{h} \, , $$ wher
|
|complex-analysis|complex-numbers|cauchy-riemann-equations|
| 1
|
A Question Regarding a Norm over Tempered Distributions
|
I am having trouble understanding a definition regarding tempered distributions from Bahouri's "Fourier Analysis and Nonlinear PDEs". In Definition 1.26, the author defines the subspace $\mathcal{S}^{'}_h$ of tempered distributions through a condition related to an $L^\infty$ norm of a convolution of a Schwartz function with a tempered distribution, which is again a tempered distribution. Yet, it is not clear to me how we can consider an $L^\infty$ norm of a tempered distribution. How does one make sense of this?
|
The shortest might be to proceed by duality with $L^1$ $$ \|u\|_{L^\infty} = \sup_{\varphi\in\mathcal S, \|\varphi\|_{L^1}\leq 1} \langle u,\varphi \rangle. $$ Of course, when the above quantity is finite, then $u$ will be identifiable with a $L^\infty$ function anyway.
|
|analysis|fourier-analysis|
| 1
|
Why don't integrals start at $-\infty$?
|
Is there any specific reason for which integrals start at zero? Logically when we think of an integral as a summation of area, it must start from negative infinity right? Why do indefinite integrals start at zero? Like when I plug in a value in the indefinite integral, we get the area from 0 to the plugged in value. Therefore why not assume that they start from zero? If yes, then why, intuitively? I think I'm confused with the idea of an indefinite integral. Please consider that I'm self-teaching Calculus and am sorry if I'm too silly but I suppose my doubts would be cleared if I'm made clear about the Geometrical (if it has any) and Mathematical meaning of an Indefinite Integral and why do we get finite values when substitute some values of x when we're told that they're indefinite and are of no meaning?
|
This seems to have been mostly answered in comments, which I have tried to organize here. Like when I plug in a value in the indefinite integral, we get the area from 0 to the plugged in value. If we forget that every indefinite integral includes a $+C$ term, or if we suppose that $C=0$ , we sometimes get the area from $0$ to the plugged-in value. For example: \begin{align} \int x \,\mathrm dx &= \frac12 x^2 + C, \\ \int_0^x t \,\mathrm dt &= \frac12 x^2. \\ \end{align} But sometimes we don't get that result: \begin{align} \int \sin x \,\mathrm dx &= -\cos x + C, \\ \int_0^x \sin t \,\mathrm dt &= 1 - \cos x \neq -\cos x. \end{align} In the following example, the indefinite integral actually does match the definite integral from $-\infty$ if you set $C=0$ in the usual formula: \begin{align} \int e^x \,\mathrm dx &= e^x + C, \\ \int_{-\infty}^x e^t \,\mathrm dt &= e^x. \\ \end{align} So a simple answer is that when you've seen enough different integrals, the premise of the question won'
|
|calculus|integration|
| 0
|
What is the fundamental group of $\mathbb{R}^2\setminus\mathbb{Z}^2$
|
Basically the title. As the plane minus $n$ points has as fundamental group the free product of $n$ copies of $\mathbb{Z}$ , is there something like a countable free product? In that case, is it related to the mentioned fundamental group?
|
Let's base ourselves at, say, somewhere in $(-1,1)^2\setminus\{0\}$ . By an easy compactness argument the fundamental group is isomorphic to $\varinjlim_{N\in\Bbb N}\pi_1((-N,N)^2\setminus(\{-(N-1),\cdots,N-1\})^2)$ which is a countable free product of $\Bbb Z$ . And yes, this makes sense. Arbitrary free products of arbitrary sets of groups exist and are groups with the usual universal property (and can be constructed in the same way). A countable free product of $\Bbb Z$ - the free group on one generator would just be the free group on $\Bbb N$ generators. My colimit is just the colimit of $F(1),F(2),F(3),\cdots$ where $F$ is the free group functor - a left adjoint - and left adjoints preserve colimits so abstractly I know this is equivalently just $F(\varinjlim(1\subset2\subset3\subset\cdots))=F(\Bbb N)$ . There is surely a natural map (take $\pi_1(\text{inclusions})$ ) from this group to $\pi_1(\Bbb R^2\setminus\Bbb Z^2)$ and we just need to check it injects and surjects. But any lo
|
|algebraic-topology|fundamental-groups|
| 0
|
All actions of $\mathrm{SO}(3)$ on $S^2$ up to equivalence
|
I'm trying to determine the complete set of smooth group actions of $\mathrm{SO}(3)$ on $S^2$ . That is, I'm trying to determine all smooth $\sigma: \mathrm{SO}(3) \rightarrow \mathrm{Diff}(S^2)$ . The two most obvious actions are the trivial action $\sigma_t(R) = I$ where all elements of $\mathrm{SO}(3)$ act by the identity, and the canonical action $\sigma_c(R) = R$ where $\mathrm{SO}(3)$ acts by rotations on $S^2$ . When I asked a related question , it was pointed out that if $f \in \mathrm{Diff}(S^2)$ , then $\sigma_f(R) = f \circ \sigma_c(R) \circ f^{-1}$ is generally a different group action. However, I realize that in a way that $\sigma_c$ and $\sigma_f$ are not essentially different. In particular, one can define an equivalence relation on the group actions where two actions $\sigma_1, \sigma_2$ are equivalent if there exists an $F \in \mathrm{Diff}(S^2)$ such that $$ \sigma_2(R)(F(x)) = F(\sigma_1(R)x)$$ where $R \in \mathrm{SO}(3)$ and $x \in S^2$ . Under this relation, $\sig
|
The group $G=SO(3)$ contains only a few types of closed subgroups: zero-dimensional (finite) subgroups, 1-dimensional subgroups (the group of rotations around a line $L$ and its index 2 extension consisting of transformations preserving the line $L$ ) and the unique 3-dimensional subgroup, $SO(3)$ itself. Consider first a topological action $SO(3)\times S^2\to S^2$ of $SO(3)$ on the 2-dimensional sphere. Pick a point $x\in S^2$ . Then the $G$ -stabilizer of $x$ is a closed subgroup $G_x < G$ . We have the continuous orbit map $G\to S^2$ , $f_x: g\mapsto gx$ . Since $f_x(g'g)=f_x(g')$ for all $g\in G_x, g'\in G$ , the map $f_x$ descends to a continuous 1-1 map $G/G_x\to S^2$ , hence, a homeomorphism to its image. Let $k$ denote the dimension of the subgroup $G_x$ . Thus, the quotient space is either 3-dimensional, 2-dimensional or a single point (when $G_x=G$ ). If $G/G_x$ is 3-dimensional, we obtain a topological embedding of a 3-dimensional manifold $G/G_x$ to the 2-dimensional manifold $S^2$ , which
|
|geometry|differential-geometry|group-actions|
| 1
|
AB is a chord of length 2ka of a circle of radius a. The tangents to the circle at A and B meet in C if k^7 is negligible calculate the area of ABC
|
AB is a chord of length 2ka of a circle of radius a . The tangents to the circle at A and B meet in C . Show that, if k is so small compared with unity that $k^7$ is negligible, the area of the triangle ABC is $a^2k^3+\frac12 a^2k^5$ . This image is a rough idea of what I think is going on. ( O is the center of the circle and b is the length of line AC and X is the intersection point of line OC and chord AB . The area required can be given as $k\times a\times h$ I first noticed that getting b in terms of a and k was possible because Triangle OAC and Triangle OBC right-angled triangles so: $$\frac{1}{a^2} + \frac{1}{b^2} = \frac{1}{(ka)^2}$$ $$b = \frac{ka}{1- k^2}$$ Using binomial expansion to $k^4$ I got $$b = ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})$$ Using this I got h as follows: $$a^2 + b^2 = \sqrt{a^2-(ka)^2} +h$$ $$h = a^2 + [ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})]^2 -a\sqrt{1-k}$$ Expanding $\sqrt{1-k}$ $$h = a^2 + [ka(1 + \frac{1}{2}k^2 + \frac{3k^4}{8})]^2-a[1-\frac{k}{2}-\fr
|
In right triangle $OXB$ , Pythagoras theorem gives : $$OX^2+(ka)^2=a^2 \iff OX=a \sqrt{1-k^2}\tag{1}$$ In right triangle $OBC$ , let us use the "Geometric mean theorem" stating that the length of altitude $BX$ is the geometric mean of the length of segments it creates on the hypotenuse , i.e., $OX$ and $XC$ : $$OX.XC=XB^2 \iff XC=\frac{(XB)^2}{OX}=\frac{k^2a^2}{a \sqrt{1-k^2}}=\frac{k^2a}{\sqrt{1-k^2}}$$ As the area $\Delta$ of triangle $ABC$ is $XB.XC$ , it remains to expand : $$\Delta=ka.\frac{k^2a}{\sqrt{1-k^2}}=k^3a^2 (1-k^2)^{-1/2}$$ into a binomial series (as you have done) : $$\Delta=k^3a^2(1+\tfrac12 k^2+...)$$ where the first omitted term is negligible.
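A small numerical check of the final expansion (assuming standard Python; not part of the original answer): the exact area $k^3a^2(1-k^2)^{-1/2}$ and the truncation $a^2k^3+\tfrac12 a^2k^5$ differ only at order $k^7$:

```python
# Compare the exact area with the two-term expansion for a = 1 and small k.
import math

a = 1.0
for k in (0.1, 0.2, 0.3):
    exact = k**3 * a**2 / math.sqrt(1 - k**2)
    approx = a**2 * k**3 + 0.5 * a**2 * k**5
    print(k, exact, approx, exact - approx)  # the difference is O(k^7)
```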
|
|geometry|algebra-precalculus|proof-writing|binomial-theorem|
| 1
|
Proof that $\text{dim cokernel}(g\circ f)\leq \text{dim cokernel}(f)+\text{dim cokernel}(g)$
|
I want to prove that the composition of two Fredholm operators is again a Fredholm operator. Let $f:H\rightarrow H'$ and $g:H'\rightarrow H''$ be two Fredholm operators, that means $f$ and $g$ are linear maps and $\text{kernel}(f),\text{cokernel}(f),\text{kernel}(g),\text{cokernel}(g)$ are all finite dimensional. I want to prove that $g\circ f$ is a Fredholm operator, too. Proving that $\text{kernel}(g\circ f)$ is finite dimensional is easy via the inequality $\text{dim kernel}(g\circ f)\leq \text{dim kernel}(g)+\text{dim kernel}(f)$ One can use the dimension formula to prove this. Now I need to prove that $\text{cokernel}(g\circ f)$ is also finite dimensional via the inequality $\text{dim cokernel}(g\circ f)\leq \text{dim cokernel}(f)+\text{dim cokernel}(g)$ My problem is that I don't know how to show the last inequality. Thanks for your time :)
|
Here's a proof that $\text{coker}(f)^* \cong \ker f^*$ . Define $\Phi:\text{coker}(f)^* \to \ker f^*$ as follows: for $\alpha \in \text{coker}(f)^*$ and $x \in H$ , $$ \Phi(\alpha)(x) = \alpha(x + \text{ran}(f)) $$ To see that $\Phi$ indeed maps elements to $\ker f^*$ , note that $$ f^*(\Phi(\alpha))(x) = (\Phi(\alpha) \circ f)(x) = \Phi(\alpha)(f(x)) = \alpha(f(x) + \text{ran}(f)) = \alpha(0) = 0 $$ To see that $\Phi$ is injective, we note that $$ \Phi(\alpha) = 0 \implies\\ \alpha(x + \text{ran}(f)) = 0\quad \text{ for all }x \implies\\ \alpha = 0. $$ To see that $\Phi$ is surjective, consider any $\beta \in \ker f^*$ . We note that for all $x$ , we have $$ f^*(\beta)(x) = 0 \implies \beta(f(x)) = 0, $$ so that $\text{ran}(f) \subset \ker \beta$ . It follows that the induced map $\tilde \beta :H/\text{ran}(f) \to \Bbb C$ is well defined, and $\Phi(\tilde \beta) = \beta$ .
|
|linear-algebra|
| 0
|
Hoare Logic: If-statement
|
Can someone explain the first assignment and the implied step? We prove from the bottom up and I don't follow what happens after the $(1=x+1)$ if-statement. This is what my book says about the assignment rule: if we wish to show that ψ holds in the state after the assignment $x = E$ , we must show that $ψ[E/x]$ holds before the assignment. But how is that the case here?
|
I'm not sure what book you're using, so I'm gonna stick to the formulation on wikipedia (hope that's Ok :]). There are two things going on in the first four lines of your example, one application of the Consequence rule and one application of the Assignment axiom schema . Set $$\psi' = (x + 1 - 1 = 0 \to 1 = x + 1) \land (\neg (x + 1 - 1 = 0) \to x + 1 = x + 1)$$ $$\psi = (a - 1 = 0 \to 1 = x + 1) \land (\neg (a - 1 = 0) \to a = x + 1)$$ and notice that $$\psi' = \psi[x + 1/a].$$ So applying the Assignment axiom schema (the rule you've mentioned) we're allowed to derive $$\overline{\{\psi'\}\ a = x + 1\ \{\psi\}}$$ that being lines 2-4. The Consequence rule allows us to strengthen the precondition ( $\psi'$ ), since $\psi'$ is valid we have $$\top \to \psi'$$ $$\psi \to \psi$$ and can derive $$\frac{\top \to \psi',\ \overline{\{\psi'\}\ a = x + 1\ \{\psi\}},\ \psi \to \psi}{\{\top\}\ a = x + 1\ \{\psi\}}.$$ The remaining lines prove $\{\psi\}\ \texttt{if } ... \texttt{ else } ...\ \{y
|
|discrete-mathematics|logic|propositional-calculus|computer-science|
| 0
|
Finding the locus of the third vertex of the triangle under the given conditions
|
A triangle is circumscribed to the ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ and two of its vertices lie on the directrices, such that one lies on each directrix. Then, the locus of the third vertex of the triangle is? I tried using an affine transform to simplify the situation and began working with $x^2+y^2=1$ , with one vertex lying on $x=1/e$ and the other on $x=-1/e$ . Then I tried various approaches. First, I tried writing the equations to the tangents and computing the third vertex, but it became extremely calculative and unpalatable. So, I switched to a different approach, by finding the pole with respect to the lines $x=1/e$ and $x=-1/e$ , which came out to be $(e,0)$ and $(-e,0)$ respectively. Then I tried to find some useful relations for chords passing through these two points, but all I could come up with was that the products of the two chord segments for a chord passing through either pole is constant (equal to $1-e^2$ , a trivial result). I am not able to figure out h
|
First thing to do is to scale the ellipse, and its directrix lines by a factor of $(1/a)$ horizontally, and $(1/b)$ vertically. This will turn the ellipse into a unit circle, and will place the lines at $x = \pm \dfrac{1}{e} $ . In this transformed figure, if you have a tangent line whose direction vector is $ u = (\cos \theta , \sin \theta) $ , then its normal vector will be $ n = (-\sin \theta, \cos \theta) $ . Tangency will occur at $ r_1 = n $ , and the equation of the tangent is $ p(t) = n + t u = (- \sin \theta + t \cos \theta, \cos \theta + t \sin \theta) $ Vertices $A$ and $B$ as shown in the figure below, are determined by setting $ x = \dfrac{1}{e} $ and $ x = - \dfrac{1}{e} $ . This gives $ A = ( \dfrac{1}{e} , \dfrac{1}{\cos \theta} (1 + \sin \theta /e ) )$ $ B = ( - \dfrac{1}{e} , \dfrac{1}{\cos \theta} (1 - \sin \theta / e )) $ And we have the distances $c_1$ between $r_1$ and $A$ , and the distance $c_2$ between $r_1$ and $B$ given by $ c_1 = \dfrac{1}{\cos \theta} ( \df
|
|analytic-geometry|
| 1
|
Deriving the hypothesis function that minimizes a regression loss function
|
In the snippet below, taken from Bishop's PRML book, I'm having trouble following the derivation that allows us to conclude that the minimizing hypothesis function, labeled $y(x)$ here, can be shown to be equal to $\mathbb{E}[t|x]$. Perhaps my issue is 1) a lack of knowledge of the calculus of variations that allows for removing one of the integrals by taking a partial derivative with respect to $y(x)$ in (1.88). And secondly, I'm not following which "sum and product rules of probability" are being used to derive (1.89). Could anyone walk me through these last 2 lines in a more detailed, step by step fashion?
|
$\def\qty#1{\left(#1\right)}$ This is much easier to understand if you avoid the drama of the calculus of variations. The expression in (1.87) is equivalent to, $$ \mathbb{E}[L]=\mathbb{E}_{x,t}\left[\qty{y(x)-t}^2\right] $$ where $\mathbb{E}_{x,t}$ is the expectation w.r.t. the joint distribution of the random variables $x$ and $t$ . Now lets fix $x$ and write, $$ \mathbb{E}[L|x]=\mathbb{E}_{t}\left[\qty{y(x)-t}^2\right | x] $$ Because we have fixed $x$ , $y(x)$ can be treated as a variable. This means that we can take the partial derivative w.r.t. $y(x)$ , $$ \frac{\partial \mathbb{E}[L|x] }{\partial y(x)} = 2\mathbb{E}_{t}\left[\qty{y(x)-t}\right | x] $$ Set the r.h.s. to zero; the solution is, $$ y^*(x)=\mathbb{E}_{t}\left[t | x\right] $$ i.e. given $x$ the optimal value for $y$ is equal to the expectation of $t$ w.r.t. the conditional distribution $p(t|x)$ which is (1.89). So for each value of $x$ we have an optimal value for $y(x)$ ; since, $$ \mathbb{E}[L]=\mathbb{E}_{x}\left[\m
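A tiny numerical illustration of this conclusion (assuming NumPy is available; not from the book or the answer): for squared loss, the best constant prediction of $t$ at a fixed $x$ is the conditional mean.

```python
# Fix x, sample t | x, and show the squared-loss minimiser over constants is the mean of t.
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(loc=2.0, scale=1.0, size=100_000)   # samples of t given x
ys = np.linspace(0.0, 4.0, 401)                    # candidate constant predictions
losses = [np.mean((y - t) ** 2) for y in ys]
print(ys[int(np.argmin(losses))], t.mean())        # both close to 2.0
```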
|
|probability|multivariable-calculus|machine-learning|
| 0
|
Prove the inequality $(1+x^2)(1+y^2)(1+z^2) \ge 64$ if $xy + yz + zx = 9$
|
Given non-negative reals $x, y, z$ satisfying $xy + yz + zx = 9$ , we're supposed to prove the inequality $(1+x^2)(1+y^2)(1+z^2) \ge 64$ . My first approach was something like C-S or Hölder, since we're multiplying sums together. Another approach was to realize that $(1+x^2)(1+y^2) \ge (x+y)^2$ , so it would suffice to prove $(x+y)^2(z^2+1) \ge 64$ , but from here I couldn't make much progress. An "obvious" observation I made was $x^2 + y^2 + z^2 \ge xy + yz + zx \implies (x+y+z)^2 \ge 27$ , but again I am stuck. Any help would be much appreciated.
|
Here is the $pqr$ -technique based proof. First, we homogenize the inequality: $$\prod(1+x^2)\ge64$$ $$\prod(9+9x^2)\ge64\cdot9^3$$ $$\prod(9x^2+xy+yz+zx)\ge64(xy+yz+zx)^3.$$ After expanding we get: $$\sum\left( x^4y^2+x^2y^4+2x^4yz+2x^3y^3\right)+42x^2y^2z^2\ge 10\sum \left(x^3y^2z+x^3yz^2\right).$$ Let $p=x+y+z$ and $r=xyz$ ( $q$ is equal to $9$ ). Then the inequality turns into: $$81p^2-2\cdot9^3-2p^3r+36pr-3r^2+2p^3r-54pr+6r^2+\\+2\cdot9^3-54pr+6r^2+42r^2\ge 90pr-30r^2.$$ After simplifying we get: $$81p^2+81r^2\ge162pr.$$ The last inequality is obviously true.
|
|inequality|contest-math|cauchy-schwarz-inequality|
| 0
|
We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty?
|
I have a question with my reasoning (or lack thereof) -- I understand what the correct answer is, but I do not see the gap in my logic. Premise: We have $n$ balls and we randomly allocate them to $n$ buckets with equal probability. What is the probability that exactly one bucket remains empty? My Attempt: Let us start by determining the probability of allocating $n-1$ balls among $n$ buckets such that there is exactly one empty bucket. As $n\over{n}$ of the buckets are initially empty, the first ball is always placed in an empty bucket (i.e. probability of $n\over{n}$ ). After placing the first ball, ${n-1}$ of $n$ buckets are empty, so the probability of placing the 2nd ball into an empty bucket is ${n-1}\over n$ . This continues on for the $n-1$ balls as follows: $$\text{P(Exactly One Bucket Empty)}= { n\over n} \times { n-1\over n} \times { n-2\over n} \times ... \times { 2\over n} = {n! \over n^{n-1}}$$ Now we can multiply that probability by the probability that we allocate the fi
|
We have $n$ balls, $n$ buckets With $1$ empty bucket, we have $^nP_{n - 1}$ arrangements of the $n$ balls. Since any of the $n$ buckets can be empty, we have $n \times ^nP_{n - 1}$ permutations. The extra/double ball can be placed on any one of the $n - 1$ balls, giving us a total of $n(n - 1) \times ^nP_{n - 1}$ permutations. If all buckets are filled, we have $^nP_n = n!$ permutations. The sample space = $n(n - 1) \times ^nP_{n - 1} + n!$ $^nP_{n - 1} = n!$ The sample space = $n(n - 1) \times n! + n!$ The event space = $n(n - 1) \times n!$ The probability we're interested in = $P(E)$ $P(E) = \frac{n(n - 1) \times n!}{n(n - 1) \times n! + n!}$ $P(E) = \frac{n(n - 1)}{n(n - 1) + 1}$ With $2$ balls & $2$ buckets ... Both buckets filled = 2 permutation 1 bucket filled = 4 permutations $P(1 \text{empty bucket}) = \frac{4}{6} = \frac{2}{3} = \frac{2(2 - 1)}{2(2 - 1) + 1}$ . Checks out!!
|
|probability|combinatorics|
| 0
|
Taylor series third order approximation
|
This question has been bothering me for a while, and I could not find a satisfying answer on the internet or in any of my books even though it is not very complex. i) If I have to find a third order polynomial approximation using the Taylor series of a 2 variable function, is it correct to write that the third order term will look something like this, $$ ... + \frac{1}{3!}[f_{xxx}(x_0,y_0)(x-a)^3 + 6f_{xxy}(x_0,y_0)(x-a)(y-b)+f_{yyy}(x_0,y_0)(y-b)^3] + .... $$ I was a bit unsure about the middle part. ii) About the Hessian matrix, how would I write a Hessian matrix if I have to make one for a third order term like the one above? I know that for second order it looks like $$H_f(x,y) = \left(\begin{array}{cccc} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{array}\right)$$ Thank You :)
|
The third order term (it can be genralized to any order term) of the multivariable Taylor series of $f(\mathbf x):\mathbf R^n\to\mathbf R$ around $\mathbf a=(a,b,...)$ is $$\begin{aligned} \frac{1}{3!}((\mathbf x-\mathbf a)\cdot\boldsymbol\nabla|_{\mathbf a})^3f(\mathbf x)&\overset{\text{in }\mathbf R^2}{=}\frac{1}{3!}((x-a)\partial_x|_{\mathbf a}+(y-b)\partial_y|_{\mathbf a})^3f(x,y)\\ &\ \ =\frac{1}{3!}((x-a)^3\partial_{xxx}|_{\mathbf a}+3(x-a)^2(y-b)\partial_{xxy}|_{\mathbf a}\\ &\hspace{1.5cm}+3(x-a)(y-b)^2\partial_{xyy}|_{\mathbf a}+(y-b)^3\partial_{yyy}|_{\mathbf a})f(x,y)\\ &\ \ = \frac{1}{3!}[(x-a)^3f_{xxx}(a,b)+3(x-a)^2(y-b)f_{xxy}(a,b)\\ &\hspace{1.5cm}+3(x-a)(y-b)^2f_{xyy}(a,b)+(y-b)^3f_{yyy}(a,b)] \end{aligned} $$ or in matrix language, though in your case it is a $2\times 2\times 2$ tensor, I found it could be expressed as (example in $\mathbf R^2$ ) $$ \frac{1}{3!}(x-a,y-b)\begin{pmatrix} (\mathbf{x-a})\cdot\boldsymbol\nabla f_{xx}(\mathbf a) & (\mathbf{x-a})\cdot\boldsym
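If one wants to double-check the $1,3,3,1$ coefficients symbolically, here is a small SymPy sketch on a test function (assuming SymPy is available; not part of the original answer):

```python
# Compare the degree-3 coefficient of f(a + t*h, b + t*k) in t with the claimed formula.
import sympy as sp

x, y, t, h, k, a, b = sp.symbols('x y t h k a b')
f = sp.exp(x + 2 * y)  # any smooth test function

g = f.subs({x: a + t * h, y: b + t * k})
term3 = sp.diff(g, t, 3).subs(t, 0) / sp.factorial(3)   # third-order Taylor term

fxxx = sp.diff(f, x, 3)
fxxy = sp.diff(f, x, 2, y)
fxyy = sp.diff(f, x, y, 2)
fyyy = sp.diff(f, y, 3)
claim = (h**3 * fxxx + 3 * h**2 * k * fxxy
         + 3 * h * k**2 * fxyy + k**3 * fyyy).subs({x: a, y: b}) / 6
print(sp.simplify(term3 - claim))  # 0
```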
|
|multivariable-calculus|taylor-expansion|hessian-matrix|
| 0
|
Why doesn't $RCA_0$ prove $\Sigma^0_1$-comprehension?
|
Answer: because that's $ACA_0$ , alright, but : Friedman et al.'s 1983 "Countable algebra and set existence axioms" has [verbatim, including old terminology and dubious notation]: Lemma 1.6 ( $RCA_0$ ) (Bounded $\Sigma^0_1$ -separation) If $\varphi$ is $\Sigma^0_1$ , then for each $n$ there is an $X$ such that $\forall x and a few pages later We now pause to introduce within $RCA_0$ two construction principles which will be useful in several places including the proof of the next Theorem. (1) Suppose $F_0, F_1,...$ is a sequence of countable fields and $\Pi_n: F_n \rightarrow F_{n+1}$ is a sequence of monomorphisms. Then the direct limit (i.e. the union) of this system exists; $lim_{\rightarrow}F_n$ is the set of all pairs $(n, x)$ where $x \in F_n$ modulo the equivalence $(n, x) \equiv (m, y) \leftrightarrow n (or vice versa) $\land \ y = \Pi^{0...0}_{m-1}\Pi^0_{n+1}\Pi_n(x)$ and this appears to make it possible more generally to use bounded $\Sigma^0_1$ -comprehension to form an 'inc
|
This is because the increasing sequence of sets may not exist, that is, $X_1,X_2,\dots$ exists but the sequence as a whole is not a set. Informally we may think that the sets in a model of $RCA_0$ are "computable" (this is almost never the case) and so there is some program that computes $X_i$ for each $i\in\mathbb{N}$ . But it might not be possible to have one program that computes all of the $X_i$ , that is a program that accepts input $(n,i)$ if $n\in X_i$ and rejects input $(n,i)$ if $n\notin X_i$ . If the sequence $(X_i)_{i\in\mathbb{N}}$ exists, in your case, then $\bigcup_{i\in\mathbb{N} }X_i=X$ also will exist. This is because I can verify if $n\in X$ by checking if $n\in X_n$ . This is even more clear when you restrict yourself to standard models of second order arithmetic (in the literature these are called $\omega$ -models). Any finite set $F=\{x_0,\dots,x_{n-1}\}$ in a standard model is computable by a program that at input $y$ checks if $y=x_0$ or $y=x_1$ , $\dots$ , or $y
|
|logic|proof-theory|peano-axioms|meta-math|reverse-math|
| 0
|
Matrices preserving the sup-norm on $\mathbb{Q}_p^n$
|
Consider the $p$ -adic field $\mathbb{Q}_p$ and $n\geq 1$ . The absolute value is $|x|=p^{-v_p(x)}$ for $x\in\mathbb{Q}_p$ , where $v_p$ is the $p$ -adic valuation. Endow $V=\mathbb{Q}_p^n$ with the sup-norm: For $v\in V$ , $\|v\|=\max_{i=1}^{n} |v_i|$ . Let $A\in M_{n\times n}(\mathbb{Z}_p)$ . Does the condition $v_p(\det A)=0$ imply that $\|Av\|=\|v\|$ for all $v\in V$ ? Is $v_p(\det A)=0$ also a necessary condition? This is trivially true in the case $n=1$ , and maybe I'm grossly generalizing, but I think this has a chance of being true.
|
Yes, and more: The subgroup of those elements of $GL_n(K)$ which leave the sup-norm invariant is exactly $GL_n(\mathcal{O}_K)$ , for any local field $K$ with its ring of integers $\mathcal{O}_K$ . Cf. reuns' answer to Natural Extensions of the $p$-Adic Norm to Higher Dimensions . (And for any commutative ring $R$ , $GL_n(R) =\{g\in M_n(R): \mathrm{det}(g) \in R^\times\}$ . Cf. also Compact subgroups of $p$-adic fields and the groups $GL_n$ over them .)
|
|number-theory|p-adic-number-theory|local-field|
| 1
|
6 conics through 3 points and tangent to a line
|
I played in GeoGebra and discovered an interesting fact: Let $L$ be a line. Let $A_1,A_2,A_3$ be points that are not collinear and not on $L$ . Let $C_1,\dots,C_6$ be 6 conics that pass through points $A_1, A_2, A_3$ and are tangent to $L$ . Let $C_1,C_2$ intersect at a fourth point $Q_1$ other than $A_1,A_2,A_3$ . Let $C_2,C_3$ intersect at a fourth point $Q_2$ other than $A_1,A_2,A_3$ . Let $C_3,C_4$ intersect at a fourth point $Q_3$ other than $A_1,A_2,A_3$ . Let $C_4,C_5$ intersect at a fourth point $Q_4$ other than $A_1,A_2,A_3$ . Let $C_5,C_6$ intersect at a fourth point $Q_5$ other than $A_1,A_2,A_3$ . Let $C_6,C_1$ intersect at a fourth point $Q_6$ other than $A_1,A_2,A_3$ . Let $D_1$ be the conic through 5 points $Q_1, Q_4, A_1, A_2, A_3$ . Let $D_2$ be the conic through 5 points $Q_2, Q_5, A_1, A_2, A_3$ . Let $D_3$ be the conic through 5 points $Q_3, Q_6, A_1, A_2, A_3$ . Then $D_1, D_2, D_3$ pass through a fourth point $E$ other than $A_1, A_2, A_3$ . (The black conics are
|
This answer explains why the family of conics passing through the points $A_1,A_2,A_3$ and tangent to a line L constitute a conic in $P$ . It uses quadratic forms and lends itself to explicit computations. (I used a combination of Mathematica and Geogebra to develop and visualize this proof). In what follows, we will discuss various families of conics incident with a given set of points and tangents. We'll use the shorthand $mP/nT$ to designate a set of $m$ points and $n$ tangents. 1. Background The conic $C$ defined by the equation $ ax^2+cy^2+2bxy+2dx+2ey+f=0 $ can be expressed as a quadratic form $p^TAp=0,$ where point $p=\begin{bmatrix} x \\ y \\ 1 \\ \end{bmatrix} $ and $A= \begin{bmatrix} a & b & d \\ b & c & e \\ d & e & f \\ \end{bmatrix} $ is a symmetric matrix. For a point $p$ , $Ap$ is the polar of $p$ with respect to $C$ (See Wikipedia: Pole and Polar ). If $p$ is on $C$ , then $Ap$ is the tangent to $C$ at $p$ . Dualizing from points to lines, the tangential equation of th
|
|conic-sections|projective-geometry|
| 0
|
Heat Equation in Cylindrical Coordinates w/ Separation of Variables
|
I am attempting to solve the 1D Heat Equation in cylindrical coordinates using separation of variables. Overall, I feel like I understand everything perfectly up until the part where I have to apply boundary conditions to find the values of my integration constants. I was hoping someone could clarify how to do this in more detail, and where to go from here. The 1D Heat Equation is defined as: $$\frac{\partial T}{\partial t} = \frac{k}{\rho C_p}\left(\frac{1}{r}\frac{\partial T}{\partial r}+\frac{\partial^2 T}{\partial r^2}\right).$$ Let $$A =\frac{k}{\rho C_p}.$$ Using separation of variables, I managed to work out the following equations: $$T(r,t) = X(r)\,Y(t)$$ $$\frac{Y'(t)}{Y(t)} + A\,m^2 = 0 $$ $$rX'(r)+r^2X''(r) + r^2m^2X(r) =0$$ These equations appear to give fairly straightforward solutions of: $$Y(t) = C_1e^{-Am^2t} $$ $$X(r) = C_2J_0(mr)+C_3Y_0(mr)$$ I understand everything up to here. However, I am lost as to how to properly apply the following boundary conditions to find the values of both the constants and $m$ : $$T(r,0) = T_i$$ $$
|
Firstly, to do separation of variables, you need to have homogeneous boundary conditions. You can get this by defining a new variable $$ u(r,t) = T(r,t) - T_\text{surface} $$ So that $u(r,t)$ will satisfy the same PDE, but with condition $u(R,t) = 0$ , and initial condition $u(r,0) = T_i - T_\text{surface}$ . Now you apply separation of variables by assuming $u_n(r,t) = X_n(r)Y_n(t)$ (I subscript with an $n$ here to indicate we're going to find an infinite number of separable solutions, then argue any linear combination also satisfies the given PDE and boundary conditions, which is the 'adding a summation' bit). You can satisfy the boundary condition $u(R,t) = 0$ by requiring $X_n(R) = 0$ (and the inner boundary condition, which is really required to make sure the temperature remains bounded, with $X_n'(0) = 0$ . Solving the problem for $X$ requires a bit of knowledge about Bessel functions. The condition at $r=0$ says that $c_2=0$ , as $Y_0$ has a logarithmic singularity at zero. The
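To make the eigenvalue condition concrete, here is a short numerical sketch (assuming SciPy is available and a cylinder radius $R$, in the notation above; not part of the original answer): the admissible values of $m$ are $m_n = j_{0,n}/R$, where $j_{0,n}$ are the positive zeros of $J_0$.

```python
# Eigenvalues m_n from J_0(m_n R) = 0, using the first few zeros of the Bessel function J_0.
import numpy as np
from scipy.special import j0, jn_zeros

R = 1.0
m = jn_zeros(0, 5) / R   # first five eigenvalues m_n
print(m)
print(j0(m * R))         # all entries are (numerically) zero
```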
|
|partial-differential-equations|bessel-functions|
| 1
|
Behaviour of polylogarithm at $|z|=1$
|
I have the sum $$ \sum_{n=1}^\infty \dfrac{\cos (n \theta)}{n^5} = \dfrac{\text{Li}_5 (e^{i\theta}) + \text{Li}_5 (e^{-i\theta})}{2}, $$ where $0\leq\theta<2\pi$ is an angle and $\text{Li}_5(z)$ is a polylogarithm function. I want to know the dominant behaviour of my function. I mostly care about the regime $\theta \ll 1$ . Could someone please help me find some information on this? I've mostly only been able to find behaviours for $z \approx 0$ and for $|z|<1$ , which are not my case!
|
It is known that $$ \operatorname{Li}_5 ({\rm e}^\mu ) = - \frac{1}{24}\mu ^4\log ( - \mu ) + \frac{{25}}{{288}}\mu ^4 + \sum\limits_{k = 0,k \ne 4}^\infty {\frac{{\zeta (5 - k)}}{{k!}}\mu ^k } $$ for $\left| \mu \right| < 2\pi$ , $\mu\neq 0$ . Thus, \begin{align*} \sum\limits_{n = 1}^\infty {\frac{{\cos (n\theta )}}{{n^5 }}} = \operatorname{Re}(\operatorname{Li}_5 ({\rm e}^{ - {\rm i}\theta } )) = - \frac{1}{24}\theta ^4\log \theta & + \zeta (5) - \frac{{\zeta (3)}}{2}\theta ^2 + \frac{{25}}{{288}}\theta ^4 \\ & + \sum\limits_{k = 3}^\infty {( - 1)^k \frac{{\zeta (5 - 2k)}}{{(2k)!}}\theta ^{2k} } \end{align*} for $0 < \theta < 2\pi$ .
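A quick numerical check of the small-$\theta$ expansion (assuming mpmath is available; not part of the original answer): the truncation through $\theta^4$ already agrees with the defining series up to roughly $O(\theta^6)$.

```python
# Compare the defining series with the expansion
# -theta^4 log(theta)/24 + zeta(5) - zeta(3) theta^2 / 2 + 25 theta^4 / 288 at a small theta.
from mpmath import mp, nsum, cos, zeta, log, inf

mp.dps = 30
theta = mp.mpf('0.1')

direct = nsum(lambda n: cos(n * theta) / n**5, [1, inf])
approx = (-theta**4 / 24 * log(theta) + zeta(5)
          - zeta(3) / 2 * theta**2 + mp.mpf(25) / 288 * theta**4)
print(direct, approx, direct - approx)  # the difference is of order theta^6
```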
|
|sequences-and-series|asymptotics|polylogarithm|
| 1
|
Why don't integrals start at $-\infty$?
|
Is there any specific reason for which integrals start at zero? Logically when we think of an integral as a summation of area, it must start from negative infinity right? Why do indefinite integrals start at zero? Like when I plug in a value in the indefinite integral, we get the area from 0 to the plugged in value. Therefore why not assume that they start from zero? If yes, then why, intuitively? I think I'm confused with the idea of an indefinite integral. Please consider that I'm self-teaching Calculus and am sorry if I'm too silly but I suppose my doubts would be cleared if I'm made clear about the Geometrical (if it has any) and Mathematical meaning of an Indefinite Integral and why do we get finite values when substitute some values of x when we're told that they're indefinite and are of no meaning?
|
I suspect that you have come to a conclusion about indefinite integrals by extrapolating from a sample of observations that was too small. There's another fundamental issue here: when you learned about the indefinite integral of a function, what were you told it is? An indefinite integral is not a summation of area. There's nothing wrong with using a summation of area (with appropriate modifications for such things as functions with negative values) to explain definite integrals. There's at least one calculus textbook that defines the definite integral this way even before defining differentiation. For the indefinite integral, however, a better basis for a definition is: What function do we differentiate to get the function $f(x)$ ? There's no area summation involved here, only differentiation. For example, you might find the following solution of an indefinite integral in a textbook: $$ \int \frac x{\sqrt{x^2 + 1}}\, \mathrm dx = \sqrt{x^2 + 1} + C. $$ If you set $C=0$ , which is what
|
|calculus|integration|
| 0
|
Expected value of the square of a stopping time
|
Problem Let $a>0$ and $B$ be a standard $\mathbb{R}$ -valued Brownian motion. Define the stopping time $S_a:=\inf\{t\geq 0\ \vert \left\lvert B_t\right\rvert = a\}$ . Compute $\mathbb{E}\left[S_a^2\right]$ . Flawed attempt at showing that $\mathbb{E}\left[S_a\right] = a^2$ I am following the solution for a similiar problem by considering the martingale $(B_t^2 - t)_{t\geq 0}$ . Applying Doob's optional sampling theorem to this martingale yields: $$\mathbb{E}\left[B_{S_a}^2-S_a\right] = \mathbb{E}\left[B_0^2-0\right] = 0\quad \text{(a.s.)}$$ Because of symmetry we have: \begin{gather*} x := \mathbb{E}\left[S_a 1_{\left\{B_{S_a}=a\right\}}\right] = \mathbb{E}\left[S_a 1_{\left\{B_{S_a}=-a\right\}}\right]\quad \text{(a.s.)}\\ \mathbb{E}\left[S_a\right] = \mathbb{E}\left[S_a 1_{\left\{B_{S_a}=a\right\}}\right]+\mathbb{E}\left[S_a 1_{\left\{B_{S_a}=-a\right\}}\right] = 2x\quad \text{(a.s.)} \end{gather*} $x$ can now be computed through the first equation: \begin{align*} 0 &= \mathbb{E}\left
|
$\def\={\mathrel{\phantom=}}$ Your calculation writes \begin{gather*} E( (a^2 - S_a) I_{\{ B_{S_a} = a \}} ) + E( ((-a)^2 - S_a) I_{\{ B_{S_a} = -a \}} )\\ = a^2 - E( S_a I_{\{ B_{S_a} = a \}} ) + (-a)^2 - E( S_a I_{\{ B_{S_a} = -a \}} ), \end{gather*} which boils down to $$ E( a^2 I_{\{ B_{S_a} = a \}} ) + E( (-a)^2 I_{\{ B_{S_a} = -a \}} ) = 2a^2, $$ but \begin{align*} &\= E( a^2 I_{\{ B_{S_a} = a \}} ) + E( (-a)^2 I_{\{ B_{S_a} = -a \}} )\\ &= a^2 ( E( I_{\{ B_{S_a} = a \}} ) + E( I_{\{ B_{S_a} = -a \}} ) )\\ &= a^2 E( I_{\{ B_{S_a} = a \}} + I_{\{ B_{S_a} = -a \}} )\\ &= a^2 E( I_{ |B_{S_a}| = |a| } )\\ &= a^2, \end{align*} where the last equality uses the fact that $S_a < \infty$ almost surely. The mistake made in the original calculation is supposedly that \begin{align*} &\= E( a^2 I_{\{ B_{S_a} = a \}} ) + E( (-a)^2 I_{\{ B_{S_a} = -a \}} )\\ &= a^2 E( I_{\{ B_{S_a} = a \}} ) + a^2 E( I_{\{ B_{S_a} = -a \}} )\\ &\mathrel{\color{red}{=}} a^2 \cdot \color{red}{1} + a^2 \cdot \color{red}{1}\
|
|stochastic-processes|expected-value|brownian-motion|martingales|stopping-times|
| 1
|
Can you explain to me why this proof by induction is not flawed? (Domain is graph theory, but that is secondary)
|
Background I am following this MIT OCW course on mathematics for computer science. In one of the recitations they come to the below result: Official solution Task: A planar graph is a graph that can be drawn without any edges crossing. Also, any planar graph has a node of degree at most 5. Now, prove by induction that any planar graph can be colored in at most 6 colors. Solution.: We prove by induction. First, let n be the number of nodes in the graph. Then define P (n) = Any planar graph with n nodes is 6-colorable. Base case, P (1): Every graph with n = 1 vertex is 6-colorable. Clearly true since it’s actually 1-colorable. Inductive step: P (n) → P (n + 1): Take a planar graph G with n + 1 nodes. Then take a node v with degree at most 5 (which we know exists because we know any planar graph has a node of degree ≤ 5), and remove it. We know that the induced subgraph G’ formed in this way has n nodes, so by our inductive hypothesis, G’ is 6-colorable. But v is adjacent to at most 5 oth
|
In the pseudocode you provided, a proof checker would complain about the second return statement: specialNode = findSomeDegree5orFewerNode(graph) subgraph = graph.drop(specialNode) return is6Colorable(subgraph) ^^^^^^^^^^^^^^^^^^^^^^ error: cannot deduce why is6Colorable(subgraph) implies is6Colorable(graph) Instead, you have to supply an explicit proof in order to convince the proof checker: specialNode = findSomeDegree5orFewerNode(graph) subgraph = graph.drop(specialNode) subgraphColoring = is6Colorable(subgraph) # induction hypothesis: assume is6Colorable for all graphs of size (graph.size - 1) unusedColor = findUnusedColor(subgraphColoring, specialNode) return appendColoring(subcoloring, unusedColor) Note that a proof that a graph is colorable is equivalent to having an algorithm to construct a valid coloring of that graph. Hence, appendColoring can be interpreted both as (a) an algorithm to append a node with a (locally) unused color to an existing coloring and (a) a proof that a
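For completeness, here is a minimal constructive version of the induction in runnable Python (a sketch, not from the recitation; it assumes the input graph is planar and is given as an adjacency dict of sets, mirroring the structure of the pseudocode above):

```python
def six_color(adj):
    """Return a dict vertex -> color in range(6) for a planar graph."""
    if not adj:
        return {}
    if len(adj) == 1:
        return {next(iter(adj)): 0}
    # a planar graph always has a vertex of degree <= 5
    v = min(adj, key=lambda u: len(adj[u]))
    # remove v and color the smaller graph (the induction hypothesis)
    rest = {u: nbrs - {v} for u, nbrs in adj.items() if u != v}
    coloring = six_color(rest)
    # v has at most 5 neighbours, so one of the 6 colors is unused
    used = {coloring[u] for u in adj[v]}
    coloring[v] = next(c for c in range(6) if c not in used)
    return coloring

# example: K4 is planar; neighbours are stored as sets
k4 = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(six_color(k4))
```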
|
|graph-theory|proof-writing|proof-explanation|induction|planar-graphs|
| 0
|
Find Gain K and Time constant T of a system from the time response
|
There is a given first-order system $\frac{K}{sT + 1}$. The responses are in the image below and the two inputs are $u_1(t) = 1(t)$ and $u_2(t) = \sqrt{2} \cdot \sin(\omega_2 t)$ . How can I find the $K$ and $T$ of the system? As far as I know, I can calculate the frequency by counting the cycles per unit time and then using $\omega = 2\pi f$ to get the angular frequency of the system. The gain would be $2$, since for the unit step the final value is $2$. I am looking for an alternative, easy solution that can be applied from the frequency response, such as calculating $\omega$ and $T$ from the gain $K = 2$, the input signal and the transfer function.
|
The transfer function $H(s)$ is: $$H(s)=\frac{K}{sT+1} \qquad (1)$$ If the system input is $U(s)$ , then the output $Y(s)$ is: $$Y(s) = U(s)H(s) $$ The response to the input $u_1(t)= u_{-1}(t)$ , where $u_{-1}(t)$ represents the step function, is: $$Y(s) = \frac{1}{s}\frac{K}{sT+1} \qquad (2)$$ The Final Value Theorem is: $$ \lim_{t \to \infty} y(t)= y(\infty)=\lim_{s \to 0}sY(s) $$ Through first graph $y(\infty)=2$ . Then, applying the above theorem to $(2)$ : $$K=2$$ Considering the frequency response ( $s= j\omega$ ), expression $(1)$ becomes: $$H(\omega)=\frac{2}{j \omega T+1}$$ With corresponding magnitude: $$ \left | H(\omega) \right |= \frac{2}{\sqrt{1+(\omega T)^2}} \qquad (3)$$ As the frequency of the output for a sinusoidal input is not changed for a linear time invariant system and, if $u_2(t)=\sqrt{2}\sin(\omega_2 t)$ , then through the second graph is possible to get: $$\omega_2=\frac{2\pi}{5}=0.4\pi \space \mathrm{rad/s}$$ Thus: $$u_2(t)=\sqrt{2}\sin(0.4\pi t)$$ Still con
|
|control-theory|optimal-control|linear-control|discrete-time|steady-state|
| 0
|
$n$-tuples as functions - why do they need to be surjective?
|
The idea of using n-tuples to represent functions is introduced in Terence Tao's Analysis I ex 3.5.2: Suppose we define an ordered $n$ -tuple to be a surjective function $x : \{i \in N : 1 \leq i \leq n\} \to X$ whose codomain is some arbitrary set $X$ (so different ordered n-tuples are allowed to have different ranges); we then write $x_i$ for $x(i)$ and also write $x$ as $(x_i)_{1 \leq i \leq n}$ . Using this definition, verify that we have $(x_i)_{1 \leq i \leq n} = (y_i)_{1 \leq i \leq n}$ if and only if $x_i = y_i$ for all $1 \leq i \leq n$ . Question: Why does the function need to be surjective? My Thoughts: Initially I thought this was to ensure the range of the function was fully accounted for by the n-tuple. However, Tao is quite careful to use range and codomain precisely, so I don't think this is an avenue worth pursuing. Then I thought about the traditional textbook approach of considering the definition of surjective - that every element of the codomain has an element
|
Without having the book at hand, I am slightly speculating. However, as stated, this construction defines $n$ -tuples as functions. So the equality $(x_i) = (y_i)$ is an equality of functions. More often than not, equality of two functions requires three ingredients: (1) equality of domains, (2) equality of codomains, and (3) equality of the values of the two functions at each input from their common domain. Without surjectivity, the $\Leftarrow$ -direction of the claim would no longer be true, as $x_i = y_i$ for all $i$ explicitly only contains the third ingredient of equality of values. Implicitly, the first ingredient of common domain is implied by the shared subindex notation $1 \leq i \leq n$ , yet equality of codomains for the two $n$ -tuples would remain open. In contemporary set theory, it is also common to simply define a function to stand for what has traditionally been called its graph. One can deduce from a graph what the domain and range look like, where range is strictly
|
|functions|elementary-set-theory|
| 0
|
Differentiability of a Dirichlet Function Modified with $x^2$
|
I am quite stumped on a homework question for my real analysis course. The question is as follows: Prove that $g(x)=\begin{cases} x^2 & x\in \mathbb{Q} \\ 0 & x\not\in \mathbb{Q} \\ \end{cases}$ is differentiable at $c = 0$ . Here is my work so far: The derivative of $g(x)$ at $c=0$ exists if $\lim_{x\to0^{-}} \frac {g(x)-g(0)} {x-0} = \lim_{x\to0^{+}} \frac {g(x)-g(0)} {x-0}$ . By $0\in \mathbb{Q}$ $$ \lim_{x\to0^{-}} \frac {g(x)-g(0)} {x-0} = \lim_{x\to0^{-}} \frac {g(x)} {x} = \frac {\lim_{x\to0^-}g(x) } {\lim_{x\to0^{-}}x} $$ Here is where I am confused. I don't see how to deal with the indeterminate form above. We are not allowed to use L'Hopital's rule, as we only proved it last lecture and it was not in the material for this section. I imagine we would use the density of $\mathbb{R}$ and the sequence convergence criterion, as we did with our original section on pathological functions, but I'm not even sure how to start. Any help would be hugely appreciated! Thanks!
|
Since $g(x)$ is either $x^2$ or $0$ , we have $0 \leq \frac{|g(x)|}{|x|} \leq |x|$ for $x\neq 0$ , hence $$\lim_{x \to 0} 0 \leq \lim_{x \to 0}\frac{|g(x)|}{|x|} \leq \lim_{x \to 0} |x| \implies \lim_{x \to 0} \left|\frac{g(x)}{x}\right| = 0 \iff \lim_{x \to 0} \frac{g(x)}{x} = 0$$ so $g'(0)=0$ .
|
|real-analysis|limits|derivatives|continuity|
| 0
|
Differentiability of a Dirichlet Function Modified with $x^2$
|
I am quite stumped on a homework question for my real analysis course. The question is as follows: Prove that $g(x)=\begin{cases} x^2 & x\in \mathbb{Q} \\ 0 & x\not\in \mathbb{Q} \\ \end{cases}$ is differentiable at $c = 0$ . Here is my work so far: The derivative of $g(x)$ at $c=0$ exists if $\lim_{x\to0^{-}} \frac {g(x)-g(0)} {x-0} = \lim_{x\to0^{+}} \frac {g(x)-g(0)} {x-0}$ . By $0\in \mathbb{Q}$ $$ \lim_{x\to0^{-}} \frac {g(x)-g(0)} {x-0} = \lim_{x\to0^{-}} \frac {g(x)} {x} = \frac {\lim_{x\to0^-}g(x) } {\lim_{x\to0^{-}}x} $$ Here is where I am confused. I don't see how to deal with the indeterminate form above. We are not allowed to use L'Hopital's rule, as we only proved it last lecture and it was not in the material for this section. I imagine we would use the density of $\mathbb{R}$ and the sequence convergence criterion, as we did with our original section on pathological functions, but I'm not even sure how to start. Any help would be hugely appreciated! Thanks!
|
Let me give a proof by definition, but with a new definition of differentiability other than yours. $f:(a,b)\to\mathbb{R}$ is differentiable at $c\in(a,b)$ if the limit $\displaystyle\lim_{x\to c}\dfrac{f(x)-f(c)}{x-c}$ exists. This somehow is equivalent to your definition. Now, as you mentioned, the limit of the difference quotient is $$\lim_{x\to0}\dfrac{g(x)}{x}$$ So for any $\epsilon>0$ , we can pick $\delta=\epsilon$ such that for any $x$ with $0<|x|<\delta$ , if $x\in\mathbb{Q}$ , then we have $$\left|\dfrac{g(x)}{x}\right|=\left|\dfrac{x^2}{x}\right|=|x|<\delta=\epsilon.$$ If $x\in\mathbb{I}:=\mathbb{R}\setminus\mathbb{Q}$ , then we have $$\left|\dfrac{g(x)}{x}\right|=0<\epsilon.$$ So the limit exists and thus we get $g$ is differentiable.
|
|real-analysis|limits|derivatives|continuity|
| 1
|
Prove that if $\{a_{k}\}$ is a sequence of real numbers such that $\sum_{k=1}^{\infty} \frac{|a_{k}|}{k} = \infty$,
|
Prove that if $\{a_{k}\}$ is a sequence of real numbers such that $$\sum_{k=1}^{\infty} \frac{|a_{k}|}{k} = \infty$$ and $$\sum_{n=1}^{\infty} \left( \sum_{k=2^{n-1}}^{2^n-1} k(a_k - a_{k+1})^2 \right)^{1/2} < \infty$$ then $$\int_{0}^{\pi} \left| \sum_{k=1}^{\infty} a_k \sin(kx) \right| \,dx = \infty.$$ My idea to prove Although the condition $$\lim_{k \rightarrow \infty} a_k=0$$ does not appear in the statement of the problem, by the well-known Cantor-Lebesgue theorem, this follows from the fact that the series $$\sum_{k=1}^{\infty} a_k \sin k x$$ is convergent almost everywhere. We note that for the application of this theorem, it would be sufficient if the series (2) were convergent on a set of positive measure. To make reference easier, we list the remaining conditions: $$\sum_{k=1}^{\infty} \frac{\left|a_k\right|}{k}=\infty$$ $$\sum_{n=1}^{\infty}\left(\sum_{k=2^{n-1}}^{2^n-1} k\left|\Delta a_k\right|^2\right)^{1 / 2}<\infty$$ where $$\Delta a_k:=a_k-a_{k+1} \quad(k=1,2, \ldots) .$$ From (4) it follow
|
I think the following is a viable approach to this problem. Set $$f(x) = 2 \sum_{k = 1}^\infty a_k \sin(kx).$$ My idea is to consider the following "model function" $g(x)$ . For each positive integer $k$ , define $$g(x) = \frac{a_{2^k} - a_1 + a_1 \cos(x/2)}{\sin(x / 2)}, \forall |x| \in [2^{-k-1}, 2^{-k}]$$ and $0$ when $|x| > 1/2$ . The crucial lemma is that $g$ approximates $f$ quite well in the $L^1$ -sense. This idea is sort of reflected in what the OP wrote, but it is hard to implement in practice. Lemma : We have $$\int_{-1/2}^{1/2} |f(x) - g(x)| dx Proof : Note that $$\sin(x/2) f(x) = \sum_{i = 1}^\infty a_i (\cos((i - 1/2) x) - \cos((i + 1/2) x)) = a_1 \cos(x / 2) + \sum_{i = 1}^\infty (a_{i + 1} - a_i) \cos((i + 1/2) x).$$ On the other hand, when $|x| \in [2^{-k-1}, 2^{-k}]$ , we have $$\sin(x / 2) g(x) = a_0 \cos(x / 2) + \sum_{i = 1}^{2^k - 1} (a_{i + 1} - a_i).$$ So we have $$\sin(x/2) (f(x) - g(x)) = \sum_{i = 1}^{2^k - 1} (a_{i + 1} - a_i)(\cos((i + 1/2) x) - 1) + \sum_{
|
|real-analysis|convergence-divergence|fourier-series|bounded-variation|
| 1
|
Prove the inequality $(1+x^2)(1+y^2)(1+z^2) \ge 64$ if $xy + yz + zx = 9$
|
Given non-negative reals $x, y, z$ satisfying $xy + yz + zx = 9$ , we're supposed to prove the inequality $(1+x^2)(1+y^2)(1+z^2) \ge 64$ . My first approach was something like C-S or Hölder, since we're multiplying sums together. Another approach was to realize that $(1+x^2)(1+y^2) \ge (x+y)^2$ , so it would suffice to prove $(x+y)^2(z^2+1) \ge 64$ , but from here I couldn't make much progress. An "obvious" observation I made was $x^2 + y^2 + z^2 \ge xy + yz + zx \implies (x+y+z)^2 \ge 27$ , but again I am stuck. Any help would be much appreciated.
|
Just another way, using CS inequality: $$(1+x^2)(1+y^2)(1+z^2)=\left((x+y)^2+(xy-1)^2 \right)(z^2+1) \\ \geqslant \left((x+y)z+(xy-1)\right)^2=64$$ Note, here we need an estimate tighter than $(1+x^2)(1+y^2)\geqslant (x+y)^2$ , motivating the expression $(x+y)^2+(xy-1)^2$ .
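A quick numeric sanity check of the identity behind this answer (a sketch; the random sampling and rescaling scheme is just one convenient way to generate feasible points): for non-negative $x,y,z$ with $xy+yz+zx=9$, the quantity $(x+y)z+(xy-1)$ equals $xy+yz+zx-1=8$, so the Cauchy–Schwarz step really does give the constant $64$.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    x, y, z = rng.uniform(0.1, 3.0, size=3)
    s = np.sqrt(9.0 / (x*y + y*z + z*x))    # rescale so that xy + yz + zx = 9
    x, y, z = s*x, s*y, s*z
    lhs = (1 + x**2) * (1 + y**2) * (1 + z**2)
    print(round((x + y)*z + x*y - 1, 10), lhs >= 64 - 1e-9)   # always 8.0, True
```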
|
|inequality|contest-math|cauchy-schwarz-inequality|
| 0
|
Why does $x^{\frac{3}{4}}$ have a domain of $x≥0$, while $x^{\frac{6}{8}}$ has a domain of $x\in{\Bbb{R}}$?
|
Why does the simplified version of $x^{\frac{6}{8}}$ have a domain of $x≥0$ while the unsimplified has a domain of $x\in{\Bbb{R}}$ ? Shouldn't they have the same domain being that they are the same expression? If a negative number were to be plugged into $x^{\frac{3}{4}}$ then it would take the 4th root of a negative number thus yielding imaginary numbers. But an unsimplified version of the same expression ( $x^{\frac{6}{8}}$ ) would yield a positive number as it would take the 8th root of a positive number.
|
They should both have the same domain because both exponents are, in fact, equal. The domain stated here for the un-simplified fraction is (I believe) wrong. $\frac{6}{8}=\frac{3}{4}$ and therefore $x^{\frac{6}{8}}=x^{\frac{3}{4}}$ . I know some textbooks say differently on this, but I know of one (Cambridge University Press, Australian version) that fixed this in a more recent edition.
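A small Python illustration of the point (a sketch; it only demonstrates the rational-number equality and the behaviour of `math.pow`, which rejects a negative base with a non-integer exponent):

```python
import math
from fractions import Fraction

print(Fraction(6, 8) == Fraction(3, 4))   # True: the two exponents are the same rational number

try:
    math.pow(-2.0, 0.75)                  # would be the 4th root of a negative cube
except ValueError as err:
    print("rejected:", err)
```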
|
|functions|exponentiation|
| 0
|
a limit of a complex function of $\zeta(s)\zeta(s+1)\Gamma(s)$
|
$$f(s) = \zeta(s)\zeta(s+1)\Gamma(s) $$ This has a double pole at $s=0$ , from $\zeta(s+1)$ and $\Gamma(s)$ respectively, and $\lim_{s\to0}s^2f(s) = -1/2$ Then, the next step, I have difficulty with understanding the following (from my book without any explanation) $$ \lim_{s\to0} s\left(f(s) - \frac{-1/2}{s^2}\right) = -\log\sqrt{2\pi} \;(= \zeta'(0)) $$ Can anyone explain why?
|
Firstly, observe that $$ \frac{{{\rm d}(t^2 f(t))}}{{{\rm d}t}} = t^2 f(t)\frac{{{\rm d}\log (t^2 f(t))}}{{{\rm d}t}} = t^2 f(t)\left( {\frac{2}{t} + \frac{{\zeta '(t)}}{{\zeta (t)}} + \frac{{\zeta '(t + 1)}}{{\zeta (t + 1)}} + \frac{{\Gamma '(t)}}{{\Gamma (t)}}} \right). $$ Utilising the Laurent series expansions of the zeta and gamma functions, we obtain $$ \frac{{\zeta '(t + 1)}}{{\zeta (t + 1)}} = - \frac{1}{t} + \gamma + o(1),\quad \frac{{\Gamma '(t)}}{{\Gamma (t)}} = - \frac{1}{t} - \gamma + o(1) $$ as $t\to0$ . Taking into account that $$ \lim _{t \to 0} t^2 f(t) = - \frac{1}{2},\quad \frac{{\zeta '(0)}}{{\zeta (0)}} = - 2\zeta '(0) = 2\log \sqrt {2\pi } , $$ we arrive at the desired result.
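For readers who want a numerical cross-check of this limit, here is a short sketch using mpmath (the chosen precision and the sample value of $s$ are arbitrary); it evaluates $s\bigl(f(s)+\tfrac{1}{2s^2}\bigr)$ at a small $s$ and compares it with $-\log\sqrt{2\pi}$:

```python
from mpmath import mp, mpf, zeta, gamma, log, sqrt, pi

mp.dps = 50                      # high precision: f(s) and -1/(2 s^2) nearly cancel
s = mpf('1e-8')
f = zeta(s) * zeta(s + 1) * gamma(s)
print(s * (f + 1 / (2 * s**2)))  # approx -0.9189385...
print(-log(sqrt(2 * pi)))        # -0.91893853320467...
```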
|
|combinatorics|complex-analysis|complex-integration|
| 1
|
Convex optimization with some cubic and quartic constraint?
|
I am an engineer who is currently working with some network optimization problem. In my work, I encounter a strange optimization problem that seems to be convex but it has some cubic and quartic constraint. The problem look like this $\begin{array}{*{20}{c}} {\mathop {\min }\limits_{x,y,z,t,y} }&{x + y + z + t + y}\\ {}&{{a_1}{x^3} + {a_2}{y^3} + {a_3}{z^3} - t \le 0}\\ {}&{{b_1}{x^4} + {b_2}{y^4} + {b_3}{z^4} - y \le 0}\\ {}&{x + y + z + t + y \ge 1}\\ {}&{...{\rm{and}}\,{\rm{some}}\,{\rm{linear}}\,{\rm{constraints}}} \end{array}$ Here all of $a_1,a_2,a_3,a_4$ and $b_1,b_2,b_3,b_4$ are just some positive number. The decision variable $x,y,z,t,y$ are non negative. What should I do to deal with this type of problem ?. Edit: In some situations, the objective function is not linear but may be a sum of quadratic and cubic terms ${x^2} + {y^2} + {z^2} + {t^3} + {y^3}$ Thank you very much !
|
For a model to be convex, both the objective function and the feasible region must be convex. The objective function $x + 2y + z + t$ is convex because it is linear, so we can get that out of the way. For a constraint function to be convex, the eigenvalues of its Hessian must be non-negative. In other words, there are two definitions we need to use: A matrix is called positive semi-definite if it is symmetric and all its eigenvalues are non-negative. If the Hessian matrix is positive semi-definite at all points in a set, then the function is convex in that set. Now let us look at each constraint individually. The constraint $x + 2y + z + t \ge 1$ is linear, so that is done. For $a_1x^3+a_2y^3+a_3z^3−t\le0$ , the Hessian is: \begin{bmatrix} 6a_1x & 0 & 0 & 0\\ 0 & 6a_2y & 0 & 0\\ 0 & 0 & 6a_3z & 0\\ 0 & 0 & 0 & 0\\ \end{bmatrix} In which the eigenvalues (the diagonals) are non-negative since $x,y,z\ge0$ and $a_1,a_2,a_3>0$ , therefore this is convex. Likewise, for constraint $b_1x^4+b_2y^4+b_3z^4−y
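A small numpy sketch of the Hessian check described above (the coefficient values and the sample point are made up for illustration): the Hessian of $a_1x^3+a_2y^3+a_3z^3-t$ is diagonal, and its eigenvalues are non-negative whenever $x,y,z\ge 0$.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])           # hypothetical positive coefficients a1, a2, a3
x = np.array([0.5, 1.0, 0.2])           # a sample point in the non-negative orthant

# Hessian of a1*x^3 + a2*y^3 + a3*z^3 - t in the variables (x, y, z, t)
H = np.diag(np.append(6 * a * x, 0.0))
print(np.linalg.eigvalsh(H))            # all entries >= 0, so the constraint function is convex here
```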
|
|optimization|nonlinear-optimization|
| 1
|
Ways to tackle the integral $\int_{0}^{\frac{\pi}{4}}\operatorname{Li}_3(\tan^4 x) \, dx$
|
$$\boxed{J = \int_0^{\frac{\pi}{4}}\operatorname{Li}_3(\tan^4(x)) \, dx}$$ Since I had no clue about trilogarithms, tried some searching to get enough understanding to solve the above integral, I found this general relation; $$\operatorname{Li}_s(z)=\frac{\Gamma(1-s)}{2\pi^{1-s}}\left(i^{1-s}\zeta\left(1-s,\frac{1}{2}+\frac{\ln(-z)}{2\pi i}\right)+i^{s-1}\zeta\left(1-s,\frac{1}{2}-\frac{\ln(-z)}{2\pi i}\right)\right)$$ Also, some general functional equations from here , specifically, $$\operatorname{Li}_3(z)+\operatorname{Li}_3(-z)=\frac{1}{4}\operatorname{Li}_3(z^2)$$ $$\operatorname{Li}_3(z)-\operatorname{Li}_3(-z^{-1})=\frac{-1}{6}\left(\ln^3 z+\pi^2 \ln z\right)$$ Or rewriting the above as; $$\operatorname{Li}_3(z)-\operatorname{Li}_3(-z^{-1})=\frac{-1}{6}\ln^3 z-\zeta(2)\ln z$$ Understanding any other aspects about trilogarithms (or polylogarithm in general) required knowledge was beyond my scope. So I started solving the integral as follows; Using some prior experience in solving
|
A more generalized integral: For $q,p\in\mathbb{Z}_{\ge1}$ with $q+p$ is even number, we have \begin{gather} \int_0^{\frac{\pi}{4}}\ln^{q-1}(\tan(x))\operatorname{Li}_p(\tan^4(x))\mathrm{d}x=-(1-(-1)^q)2^{2p-3}(1+2^{-p})|E_{q-1}|\left(\frac{\pi}{2}\right)^{q}\eta(p)\\ -(q-1)!2^{2p-2}\sum_{k=0}^{\lfloor{\frac{q-2}{2}}\rfloor}\binom{q+p-2k-2}{p-1}\frac{|E_{2k}|}{(2k)!}\left(\frac{\pi}{2}\right)^{2k+1}\lambda(q+p-2k-1)\\ -(q-1)!2^{2p-1}\sum_{k=0}^{\lfloor{\frac{q}{2}}\rfloor}\binom{q+p-2k-1}{p-1}\lambda(2k)\beta(q+p-2k)\\ -(q-1)!2^{2p}\sum_{k=0}^{\lfloor{\frac{p}{2}}\rfloor}\binom{q+p-2k-1}{q-1}2^{-4k}\zeta(2k)\beta(q+p-2k), \end{gather} where $\lfloor{\cdot}\rfloor$ is the floor function, $E$ is the Euler number, $\eta(s)=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^s}=(1-2^{1-s})\zeta(s)$ is the Dirichlet eta function, $\lambda(s)=\sum_{n=0}^\infty\frac{1}{(2n+1)^s}=(1-2^{-s})\zeta(s)$ is the lambda function, and $\beta(s)=\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^s}$ is the Dirichlet beta function
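The closed form above is intricate, so a direct numerical evaluation of the boxed integral gives a useful reference value to test it against. A minimal mpmath sketch (the precision and quadrature routine are arbitrary choices):

```python
from mpmath import mp, quad, polylog, tan, pi

mp.dps = 30
J = quad(lambda x: polylog(3, tan(x)**4), [0, pi/4])
print(J)   # numerical value of the boxed integral, for checking the closed form
```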
|
|calculus|integration|definite-integrals|contest-math|special-functions|
| 0
|
Finding vector equation of a line
|
Show that the equation of a straight line passing through the point with position vector $\vec{b}$ and perpendicular to the line $\vec{r}=\vec{a}+\mu \vec{c}$ is of the form $\vec{r}=\vec{b}+\beta \vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ . How to derive the vector parallel to the required line? I get that this vector must be perpendicular to $\vec{c}$ but I can't derive the $\vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ form.
|
In terms of unknowns, it may look abstract. Let's recall what we did in a first multivariable class, with a numerical example: find a line passing through $b=(\pi,1,2)$ that is perpendicular to the line $$\ell:(1,2,3)t+(1,1,1)$$ How do you solve it? You first identify that the set of vectors perpendicular to $\ell$ must satisfy $v\cdot(1,2,3)=0$ , so this gives a plane with $c:=(1,2,3)$ as normal vector (equivalently $x+2y+3z=0$ ). Now we want a line passing through $b$ , so we shift this plane parallelly to $\Pi:x+2y+3z=D$ so that $b$ lies on $\Pi$ . Next we need to choose one line that passes through $b$ and lies on $\Pi$ . How can you choose? You identify that $a:=(1,1,1)$ is on $\ell$ , but $a$ may not lie on $\Pi$ , and neither may $a-b$ . Hence we consider the projection of $a-b$ onto $\Pi$ ; this gives the direction vector of the line we need. That finishes the numerical example. Now can you construct your proof based on this idea?
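A short numpy check of the recipe with the numbers used in this answer (purely a sanity check): the direction $\vec{c}\times((\vec{a}-\vec{b})\times\vec{c})$ is perpendicular to $\vec c$ and equals the projection of $\vec a-\vec b$ onto the plane with normal $\vec c$, up to the factor $|\vec c|^2$.

```python
import numpy as np

a = np.array([1.0, 1.0, 1.0])
b = np.array([np.pi, 1.0, 2.0])
c = np.array([1.0, 2.0, 3.0])

d = np.cross(c, np.cross(a - b, c))                 # direction of the required line
proj = (a - b) - (a - b) @ c / (c @ c) * c          # projection of a-b onto the plane c.v = 0

print(np.isclose(d @ c, 0.0))                       # True: perpendicular to the given line
print(np.allclose(d, (c @ c) * proj))               # True: same direction as the projection
```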
|
|vector-spaces|inner-products|coordinate-systems|cross-product|
| 0
|
How to represent this limit:$\lim\limits_{n\to\infty}\left(\frac{n^n}{n!}\prod_{k=1}^n\frac{x+\frac{n}{k}}{x^2+\frac{n^2}{k^2}}\right)^{\frac{x}{n}}$?
|
$$f(x)= \lim_{n\rightarrow\infty} \left(\dfrac{n^n(x+n)(x+\dfrac{n}{2})\cdots(x+\dfrac{n}{n})}{n!(x^2+n^2)(x^2+\dfrac{n^2}{2^2})\cdots(x^2+\dfrac{n^2}{n^2})}\right)^{x/n} , \quad x>0$$ How can I represent this limit in a simple form? I tried rewriting the expression inside the limit as $\left(\dfrac{\prod\limits_{k=1}^n \left(\dfrac{kx}{n}+1\right)} {\prod\limits_{k=1}^n \left(\dfrac{k^2 x^2}{n^2}+1\right)}\right)^{x/n}$ ; please help me.
|
Cauchy D'Alembert's Law If $a_n$ is a sequence of real numbers and if $\lim\limits_{n \to \infty} \frac{a_{n+1}}{a_n} =l$ then $\lim\limits_{n \to \infty} \sqrt[n]{a_n}=l$ $\lim\limits_{n \to \infty}\left(\frac{n^n}{n!}\right)^{\frac{x}{n}}=e^x$ This can be proved via Cauchy D'Alembert's Law. Now let's return to the big fraction, i.e. $\lim\limits_{n\to\infty}\left(\prod\limits_{k=1}^n\frac{x+\frac{n}{k}}{x^2+\frac{n^2}{k^2}}\right)^{\frac{x}{n}}$ . Let $a_n(x)= \prod\limits_{k=1}^n\frac{x+\frac{n}{k}}{x^2+\frac{n^2}{k^2}}$ . To make the notation less messy I will write $a_n$ instead of $a_n(x)$ . $$b_n:=\frac{a_{n+1}}{a_n}=\frac{x+1}{x^2+1}\prod\limits_{k=1}^n \left[ \frac{\left(x+ \frac{n+1}{k}\right)\left(x^2+\frac{n^2}{k^2}\right) }{\left(x+\frac{n}{k}\right)\left(x^2+ \frac{(n+1)^2}{k^2}\right)}\right]$$ $$=\frac{x+1}{x^2+1}\prod\limits_{k=1}^n \left[1+\frac{1}{kx+n} \right]\left[1+\frac{2n+1}{k^2x^2 +n^2} \right]^{-1}$$ Let $c_n =\prod\limits_{k=1}^n \left[1+\frac{1}{kx+n} \right]$ , $d_n=\prod\limits_{k=1
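A quick numerical illustration of the first claim, $\left(\frac{n^n}{n!}\right)^{x/n}\to e^x$ (a sketch; it uses `math.lgamma` to evaluate $\log n!$ stably, and the particular $x$ and $n$ are arbitrary):

```python
import math

x, n = 1.7, 10**6
log_ratio = n * math.log(n) - math.lgamma(n + 1)    # log(n^n / n!)
print(math.exp(x * log_ratio / n), math.exp(x))     # both ≈ 5.4739; the gap shrinks as n grows
```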
|
|real-analysis|calculus|limits|
| 0
|
Finding vector equation of a line
|
Show that the equation of a straight line passing through the point with position vector $\vec{b}$ and perpendicular to the line $\vec{r}=\vec{a}+\mu \vec{c}$ is of the form $\vec{r}=\vec{b}+\beta \vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ . How to derive the vector parallel to the required line? I get that this vector must be perpendicular to $\vec{c}$ but I can't derive the $\vec{c}×\{(\vec{a}-\vec{b})×\vec{c}\}$ form.
|
Let the required line be $\vec{r} = \vec{b} + \beta \vec{l}$ . Now as per the diagram below, we can say that vector $\vec{p} = (\vec{a}-\vec{b}) \times \vec{c}$ is perpendicular to the plane containing these two lines. With this, we can say that $\vec{l}$ is perpendicular to $\vec{c}$ and $\vec{p}$ . So we can write $\vec{l}$ is parallel to $\vec{c} \times \vec{p}$ . So the line equation becomes $$\vec{r} = \vec{b} + \beta \vec{c} \times ((\vec{a}-\vec{b}) \times \vec{c})$$ EDIT After the comments, I am adding more details here. Assume that these two lines lie in plane $M$ . Now $\vec{a} - \vec{b}$ and $\vec{c}$ lie in plane $M$ . So the vector perpendicular to the plane is $\vec{p}$ . As $\vec{l}$ also lies in plane $M$ , $\vec{l}$ is perpendicular to $\vec{p}$ .
|
|vector-spaces|inner-products|coordinate-systems|cross-product|
| 0
|
Prove ⌈a/b⌉ ≤ a/b + (b-1)/b
|
For integers $a, b > 0$ , prove $⌈a/b⌉ ≤ (a + (b-1))/b$ . RHS $= a/b + (b-1)/b $ where $ (b-1)/b $ is in $[0,1)$ . If $a/b$ is an integer, the inequality holds true as we are adding a non-negative term. If $a/b$ is not an integer, $⌈a/b⌉ < a/b + 1$ -- Equation 1. How to demonstrate that switching $1$ with some smaller number $(b-1)/b$ leads to the $<$ transforming to $≤$ in Equation 1? Similarly, prove $⌊a/b⌋ ≥ (a - (b-1))/b$ .
|
Assuming $a$ and $b$ are positive integers, that's not a good approach for it. Instead: write $a=qb+r$ with $0\leq r\lt b$ . Then $$\left\lceil\frac{a}{b}\right\rceil = \left\{\begin{array}{ll} q &\text{if }r=0,\\ q+1 &\text{if }r\gt 0. \end{array}\right.$$ The case "if $\frac{a}{b}$ is an integer" is the case $r=0$ and you are fine. In the second case, note that $$\left\lceil\frac{a}{b}\right\rceil = \frac{a}{b} + \frac{b-r}{b}.$$ Go from there. Or you can deal with both cases simultaneously by just considering $\frac{a+(b-r)}{b}$ . Do something similar for $\left\lfloor \frac{a}{b}\right\rfloor$ . If $a$ and $b$ are not positive integers , then it is false. Take $a=0.1$ , $b=0.2$ . Then $\frac{a}{b}=\frac{1}{2}$ , so $\left\lceil\frac{a}{b}\right\rceil = 1$ . But $$\frac{a+(b-1)}{b} = \frac{0.3-1}{0.2} = -\frac{0.7}{0.2} = -3.5,$$ and $1$ is not less than or equal to $-3.5$ .
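For the positive-integer case, a brute-force check is easy to run; here is a small sketch using exact `Fraction` arithmetic (the search bound 300 is arbitrary):

```python
from fractions import Fraction
from math import ceil, floor

ok = all(ceil(Fraction(a, b)) <= Fraction(a + b - 1, b) and
         floor(Fraction(a, b)) >= Fraction(a - (b - 1), b)
         for a in range(1, 301) for b in range(1, 301))
print(ok)   # True
```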
|
|inequality|proof-writing|ceiling-and-floor-functions|
| 0
|
Prove ⌈a/b⌉ ≤ a/b + (b-1)/b
|
For integers $a, b > 0$ , prove $⌈a/b⌉ ≤ (a + (b-1))/b$ . RHS $= a/b + (b-1)/b $ where $ (b-1)/b $ is in $[0,1)$ . If $a/b$ is an integer, the inequality holds true as we are adding a non-negative term. If $a/b$ is not an integer, $⌈a/b⌉ < a/b + 1$ -- Equation 1. How to demonstrate that switching $1$ with some smaller number $(b-1)/b$ leads to the $<$ transforming to $≤$ in Equation 1? Similarly, prove $⌊a/b⌋ ≥ (a - (b-1))/b$ .
|
Note that when $b\nmid a$ , $\left\lceil\dfrac{a}{b}\right\rceil=\dfrac{a}{b}+1-\left\{\dfrac{a}{b}\right\}$ , where $\{\cdot\}$ is the fractional part of the number (if $b\mid a$ the inequality is immediate). So the question reduces to $$1-\left\{\dfrac{a}{b}\right\}\le\dfrac{b-1}{b}\iff \left\{\dfrac{a}{b}\right\}\ge\dfrac{1}{b}$$ This is almost obvious. In case it is not clear: suppose $(a,b)=1$ ; then $\left\{\dfrac{a}{b}\right\}$ must be a nonzero fraction with denominator $b$ . Now suppose $(a,b)>1$ ; then the fractional part is a nonzero fraction with denominator $d:=\dfrac{b}{(a,b)}$ , so it must be at least $\dfrac{1}{d}\ge\dfrac{1}{b}$ .
|
|inequality|proof-writing|ceiling-and-floor-functions|
| 1
|
An integer and its inverse modulo prime, both less than half of the prime
|
Question: A prime number $p$ is mundane if there exist positive integers $a$ and $b$ less than $p/2$ such that $\frac{ab-1}{p}$ is a positive integer. Find, with proof, all prime numbers that are not mundane. My teacher gave me this, and he said it was from his notes. Though, from the terminology, I wouldn't be surprised if this were from a contest. I have found by manually checking that the only non-mundane primes are $2, 3, 5, 7, 13$ . I did not check for large values because: a) I don't think the question expected me to do that and b) the below attempt "shows" that it is very difficult for a large prime to be non-mundane. Attempt: The above-listed primes are clearly non-mundane. We are trying to show all others are mundane. We basically need to find two numbers $a,b$ whose product is $kp+1$ for $k \in \Bbb N$ such that $1 \le a, b < p/2$ . However, $1 \le k$ and $kp+1 = ab < p^2/4$ . So, as we increase $p$ , the number of possible values of $k$ in that range increases (quadratic grows faster than linear), so it is highly probable that
|
There are no non-mundane primes larger than the ones you've indicated. To prove this, there are several cases to handle, with the following only considering odd primes. First, if $p \equiv 3 \pmod{4} \;\to\; p + 1 = 4k$ for some integer $k$ , then $4$ and $k$ will make $p$ mundane unless $4 \ge \frac{p+1}{2} \;\to\; p \le 7$ , i.e., the primes $3$ and $7$ you've already shown. Next, for primes where $p \equiv 5 \pmod{8}$ with $p \ge 29$ (since $p = 5$ and $p = 13$ are non-mundane, as you've noted), consider any even integer from $4$ to $\frac{p-5}{4}$ inclusive, calling it $a_1$ . Note that if its multiplicative inverse, call it $b_1$ , is between $\frac{p+1}{2}$ and $\frac{3p-3}{4}$ inclusive, then $2b_1$ is congruent to a value between $1$ and $\frac{p-1}{2}$ (inclusive), so we can choose $a = \frac{a_1}{2}$ and $b = 2b_1$ . Alternatively, if $b_1$ is an even value between $\frac{3p+1}{4}$ and $p-1$ , inclusive, then since $2a_1 \lt \frac{p}{2}$ , we can choose $a = 2a_1$ and $
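The finitely many small cases, and the claim that $2,3,5,7,13$ are the only non-mundane primes, are easy to confirm by brute force; a short sketch (the bound 1000 and the use of sympy's `primerange` are just convenient choices):

```python
from sympy import primerange

def mundane(p):
    # positive a, b < p/2 with (a*b - 1)/p a positive integer, i.e. a*b ≡ 1 (mod p) and a*b > 1
    half = (p + 1) // 2                 # a, b range over 1, ..., (p-1)//2, all < p/2
    return any(a * b > 1 and (a * b) % p == 1
               for a in range(1, half) for b in range(a, half))

print([p for p in primerange(2, 1000) if not mundane(p)])   # [2, 3, 5, 7, 13]
```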
|
|elementary-number-theory|prime-numbers|contest-math|
| 0
|
Proving Lemmas Related to Outer Measure When Measurability Is Not Guaranteed
|
In Royden's Real Analysis, the problem given on page 43 as problem 18 used to be as follows: "Let $E$ have finite outer measure. Show that there exists an $F_\sigma$ set $F$ and a $G_\delta$ set $G$ such that $F \subseteq E \subseteq G$ and $m^*(F) = m^*(E) = m^*(G)$ ." But according to the errata, the problem has been revised to the following: "Let $E$ have finite outer measure. Show that there exists a $G_\delta$ set $G$ such that $E \subseteq G$ and $m^*(E) = m^*(G)$ . Show that E is measurable if and only if there exists an $F_\sigma$ set $F$ such that $F \subseteq E$ and $m^*(F) = m^*(E)$ ." I assume that the problem has been revised because the finite outer measure of a set without the set's measurability doesn't necessarily ensure the existence of such an $F_\sigma$ set. Likewise, even though it is true that: "If $E$ is a set of real numbers of finite outer measure, then for any $\epsilon > 0$ , there exists an open set $O_\epsilon$ covering $E$ such that $m^*(O_\epsilon) < m^*(E) + \epsilon$ .", We
|
The assertion that $$m^*(E)=\inf\{m^*(U):E\subset U,\quad U\text{ is open} \} $$ is called the outer regularity of $m^*$ . On the other hand, its "dual" assertion, that is, $$m^*(E)=\sup\{m^*(K):K\subset E,\quad K\text{ is compact}\} $$ is called the inner regularity of $m^*$ . The outer measure $m^*$ has the outer regularity (which follows immediately from its definition), but it does not have the inner regularity in general. In fact, Lebesgue measurable sets are precisely those sets with the inner regularity. (This is still true even if we replace "compact" with "closed".) So as for your questions, such an $F_\epsilon$ can be found (for all $\epsilon>0$ ) precisely when $E$ is Lebesgue measurable. In this case, the union of $F_{1/n}$ will be the desired $F_{\sigma}$ -set.
|
|real-analysis|
| 0
|
Proving $\Im((\tan(\frac{m\pi}{2n+1})+i)^{2n})=\tan(\frac{m\pi}{2n+1})\Re((\tan(\frac{m\pi}{2n+1})+i)^{2n})$ for integers m and n
|
Method 1 Let $$\alpha=\frac{m\pi}{2n+1}$$ $m,n \in ℤ$ We have $$(\tan(\alpha)+i)^{2n+1}=(\frac{\cos(\alpha)-i\sin(\alpha)}{\cos(\alpha)}i)^{2n+1}=i^{2n+1}(\exp(-i(\frac{m\pi}{2n+1})))^{2n+1}/\cos^{2n+1}(\alpha)$$ $$=i(-1)^{m+n}/\cos^{2n+1}(\alpha)$$ Let $(\tan(\alpha)+i)^{2n}=u+iv$ where $u,v \in ℝ$ Then $(\tan(\alpha)+i)^{2n+1}=(u+iv)(\tan(\alpha)+i)=i(-1)^{m+n}/\cos^{2n+1}(\alpha)$ But $\Re(i(-1)^{m+n}/\cos^{2n+1}(\alpha))=0$ so $\Re((u+iv)(\tan(\alpha)+i))=u\tan(\alpha)-v=0$ Thus, $\Im((\tan(\frac{m\pi}{2n+1})+i)^{2n})=\tan(\frac{m\pi}{2n+1})\Re((\tan(\frac{m\pi}{2n+1})+i)^{2n})$ Method 2 Case 1 For the $n=-1$ case it's easy to check that indeed $$\Im((\tan(\frac{m\pi}{2(-1)+1})+i)^{2(-1)})=\tan(\frac{m\pi}{2(-1)+1})\Re((\tan(\frac{m\pi}{2(-1)+1})+i)^{2(-1)})$$ for any arbitrary integer $m$ , as both the LHS and the RHS will be zero Case 2 Similarly, in the case where $n=0$ it's trivial to check that the equality holds Case 3 Let $z=x+iy$ , let $m$ $\in$ $ℤ$ and let $n$ $\in$ $ℤ\set
|
Let $$a := \cos\frac{m\pi}{2n + 1}, \quad b := \sin\frac{m\pi}{2n + 1}.$$ We have $$(a \pm \mathrm{i}b)^{2n + 1} = (-1)^m. \tag{1}$$ Let $u, v \in \mathbb{R}$ such that $$\left(\frac{b}{a} + \mathrm{i}\right)^{2n} = u + \mathrm{i} v.$$ We have $$\left(\frac{b}{a} + \mathrm{i}\right)^{2n + 1} = \left(u + \mathrm{i} v\right)\left(\frac{b}{a} + \mathrm{i}\right)$$ or $$\frac{(a - \mathrm{i}b)^{2n + 1}\mathrm{i}^{2n + 1}}{a^{2n + 1}} = \left(\frac{ub}{a} - v\right) + \mathrm{i}\left(\frac{vb}{a} + u\right)$$ or (using (1)) $$\frac{(-1)^m(-1)^n \mathrm{i}}{a^{2n + 1}} = \left(\frac{ub}{a} - v\right) + \mathrm{i}\left(\frac{vb}{a} + u\right)$$ which results in $$\frac{ub}{a} - v = 0.$$ We are done.
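A quick floating-point check of the identity (a sketch; the $(m,n)$ pairs are arbitrary samples): compute $(\tan\frac{m\pi}{2n+1}+i)^{2n}$ directly and compare its imaginary part with $\tan\frac{m\pi}{2n+1}$ times its real part.

```python
import math

for m, n in [(1, 5), (2, 7), (4, 11)]:
    t = math.tan(m * math.pi / (2 * n + 1))
    z = (t + 1j) ** (2 * n)
    print(abs(z.imag - t * z.real))   # ~0, up to floating-point rounding
```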
|
|solution-verification|complex-numbers|
| 1
|
Fubini's theorem for Bochner Integral
|
I've just been (as of two days ago) introduced to the Bochner integral, and I've read that Fubini's theorem holds for it, but I haven't been able to find its version for the said integral. So here's my question: Let $(X,\Sigma,\mu),(Y,\Omega,\upsilon)$ be $\sigma$ -finite measure spaces, $E$ a Banach space and $f:X\times Y\rightarrow E$ a Bochner integrable function. I've seen (through here , page 11, theorem 1.19) that a closed linear operator (as I think is the case for the integral of Bochner integrable functions) commutes with the Bochner integral. Can I then affirm that $$ \int_{X}\left(\int_{Y}f\ d\upsilon\right)d\mu=\int_{Y}\left(\int_{X}f\ d\mu\right)d\upsilon$$ ? I apologize in advance if anything is wrong or nonsense. As I said, I've just come in contact with this subject.
|
I think that your idea does not work immediately. Indeed, if $f \in L^1(X; F)$ , you can take a bounded linear operator $A \colon F \to G$ and have $$ \int_X A f(x) \, \mathrm{d}x = A \int_X f(x) \, \mathrm{d}x. $$ This does not yield your desired formula in case $f \in L^1(X \times Y; E)$ , since the integral over $Y$ is not a linear operator on the space $X$ . However, the space $L^1(X \times Y; E)$ is isometrically isomorphic to $L^1(X; L^1(Y;E))$ . Let $I \colon L^1(Y;E) \to E$ be the Bochner integral. Then, $$ \int_X \int_Y f \, \mathrm{d}y \, \mathrm dx = \int_X I f \,\mathrm dx = I \int_X f \,\mathrm dx = \int_Y\int_X f \,\mathrm{d}x\,\mathrm{d}y $$ for all $f \in L^1(X; L^1(Y;E))$ . One should be aware that the proof of the mentioned identification of spaces might already require some Fubini-type argument (I do not remember exactly...)
|
|functional-analysis|banach-spaces|fubini-tonelli-theorems|
| 1
|
Show weak convergence of probability measures on a countable space
|
Let $(X,\mathcal{X})$ be a metric space, where we assume that $X$ is countable and discrete. Show that probability measures $P_n$ converge to $P$ weakly (i.e., for all bounded continuous functions on $X$ , one has $\int f dP_n\to \int f dP$ ) if and only if $P_n(\{x\})\to P(\{x\})$ as $n\to \infty$ . For one direction, assume that $P_n$ converges to $P$ weakly. We choose $f(x)=I[\{x\}]$ . Then $$ \int f dP_n=\int I[\{x\}] dP_n=P_n(\{x\})\to \int fdP=P(\{x\}) $$ But how to claim that $I[\{x\}]$ is continuous? (It is bounded by 1). On the other hand, assume that $P_n(\{x\})\to P(\{x\})$ as $n\to \infty$ . Assume that $|f|\le C$ for some $C>0$ . Let $X=\{x_1,x_2,\dots\}$ . Then $$ |\int f dP_n- \int f dP|=|\sum_if(x_i)P_n(x_i)-\sum_if(x_i)P(x_i)|\le C\sum_{x_i}|P_n(x_i)-P(x_i)| $$ Then as $n\to \infty$ , $$ \lim_{n\to\infty}\sum_{i=1}^\infty|P_n(x_i)-P(x_i)|=\sum_{i=1}^\infty\lim_{n\to\infty}|P_n(x_i)-P(x_i)|=0 $$ (How to verify it?) We use the fact that every open set $G$ is the countable u
|
$\sum (P(x_i)-P_n(x_i))^{+} \to 0$ by the Dominated Convergence Theorem, since $0 \le (P(x_i)-P_n(x_i))^{+} \leq P(x_i)$ and $\sum P(x_i) = 1 < \infty$ . Also, $\sum (P(x_i)-P_n(x_i)) =1-1=0$ . Since $x^{-}=x^{+}-x$ we get $\sum (P(x_i)-P_n(x_i))^{-} \to 0$ . Finally, $|x|=x^{+}+x^{-}$ , so $\sum |P(x_i)-P_n(x_i)| \to 0$ .
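A numeric illustration of this Scheffé-type argument (a sketch with made-up data: geometric distributions on $\{0,1,2,\dots\}$, truncated at a large index so the tail is negligible): pointwise convergence of the point masses forces the total $\ell^1$ distance to $0$.

```python
import numpy as np

ks = np.arange(3000)

def geom_pmf(q):                      # pmf of a Geometric(q) distribution on {0, 1, 2, ...}
    return (1 - q) ** ks * q

target = geom_pmf(0.5)
for n in (10, 100, 1000, 10000):
    print(np.abs(geom_pmf(0.5 + 1.0 / n) - target).sum())   # decreases toward 0
```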
|
|real-analysis|
| 0
|
What's the purpose of the KKT condition when first-order optimality condition exists?
|
Given a convex optimization problem $$\min f(x), x \in D$$ $f, D$ convex. The first-order optimality condition says $x$ is the minimizer if and only if $\nabla f(x)^T (x-y) \geq 0, \forall y\in D.$ For unconstrained problems, this is $\nabla f(x) = 0$ . This seems to be perfectly suited for finding minimizers $x$ . So why bother with KKT conditions ? We already have an equation which we can use to solve for minimizers $x$ . Furthermore, KKT condition is just sufficient (but might not be necessary), so it is an inferior version as compared to the first-order optimality condition. Can someone shed light as to why we might care about KKT conditions even though we have first-order optimality condition?
|
... because it is easier to check whether the KKT conditions are satisfied compared to the first-order conditions. In fact, you only need some set of multipliers, plug it into the set of equations and see if it fits. For the first-order conditions, you have to check every $y \in D$ .
|
|optimization|convex-analysis|convex-optimization|numerical-optimization|non-convex-optimization|
| 0
|
why is $\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!}$ = 8 GRE subject problem
|
$$\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!} = 8 $$ Hi, this is from the GRE subject practice test. I know the answer but don't understand why it is so. My guess is $\sum_{n=0}^\infty \frac{x^n}{n!}= e^x$ so $e^{3\log2}= e^{\log(2^3)}=8$ . I tried $\sum_{n=0}^\infty \frac{(3\log2)^n}{(n+1)!} = \sum_{n=1}^\infty \frac{(3\log2)^{n-1}}{(n)!}$ but wasn't able to separate $1$ from $(3\log2)^{n-1}$ to get something like $\sum_{n=0}^\infty \frac{x^n}{n!}$
|
The statement is wrong. We have $$ \begin{align*} \sum_{n=0}^\infty\frac{(3\ln 2) ^n}{(n+1)! }&=\sum_{n=1}^\infty\frac{(3\ln 2) ^{n-1}}{n! }\\ &=\frac{1}{3\ln 2}\sum_{n=1}^\infty\frac{(3\ln 2) ^{n}}{n! }\\ &=\frac{1}{3\ln 2}\left(-\frac{(3\ln 2)^0}{0!}+\sum_{n=0}^\infty\frac{(3\ln 2) ^{n}}{n! }\right)\\ &=\frac{1}{3\ln 2}(-1+8) \\ &=\frac{7}{3\ln 2}. \end{align*} $$
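A one-line numeric confirmation (a sketch; truncating at 60 terms is more than enough for convergence):

```python
import math

s = sum((3 * math.log(2)) ** n / math.factorial(n + 1) for n in range(60))
print(s, 7 / (3 * math.log(2)))   # both ≈ 3.3663..., so the claimed value 8 is indeed wrong
```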
|
|real-analysis|calculus|
| 1
|
Fubini's theorem for Bochner Integral
|
I've just been (as of two days ago) introduced to the Bochner integral, and I've read that Fubini's theorem holds for it, but I haven't been able to find its version for the said integral. So here's my question: Let $(X,\Sigma,\mu),(Y,\Omega,\upsilon)$ be $\sigma$ -finite measure spaces, $E$ a Banach space and $f:X\times Y\rightarrow E$ a Bochner integrable function. I've seen (through here , page 11, theorem 1.19) that a closed linear operator (as I think is the case for the integral of Bochner integrable functions) commutes with the Bochner integral. Can I then affirm that $$ \int_{X}\left(\int_{Y}f\ d\upsilon\right)d\mu=\int_{Y}\left(\int_{X}f\ d\mu\right)d\upsilon$$ ? I apologize in advance if anything is wrong or nonsense. As I said, I've just come in contact with this subject.
|
It follows from the usual Fubini theorem. Let $p\in E^*$ be a bounded linear functional. Then, by the usual Fubini theorem and the fact that you mentioned, $$p\int_{X}\int_{Y}f\ \mathrm d\upsilon\ \mathrm d\mu= \int_{X}p \int_{Y}f\ \mathrm d\upsilon\ \mathrm d\mu=\int_{X}\int_{Y}pf\ \mathrm d\upsilon\ \mathrm d\mu$$ $$=\int_{Y}\int_{X}p f\ \mathrm d\mu\ \mathrm d\upsilon=\int_{Y}p\int_{X}f\ \mathrm d\mu\ \mathrm d\upsilon=p\int_{Y}\int_{X}f\ \mathrm d\mu\ \mathrm d\upsilon$$ Since bounded linear functionals separate the elements of $E$ by the Hahn-Banach theorem, we get $$\int_{X}\int_{Y}f\ \mathrm d\upsilon\ \mathrm d\mu=\int_{Y}\int_{X}f\ \mathrm d\mu\ \mathrm d\upsilon.$$
|
|functional-analysis|banach-spaces|fubini-tonelli-theorems|
| 0
|
Segal subdivision
|
I call Segal subdivision the endofunctor of simplicial objects in a category $\mathcal{C}$ induced by the doubling endofunctor of $\Delta^{op}$ sending $x_0 to $x_0 . I am calling this Segal subdivision as in Weibel's K-book chapter IV exercise 3.10. Here is Segal's original paper , where what I am discussing is the content of the first appendix. It turns out that if we consider a simplicial space $A$ and denote its Segal subdivision by $sub(A)$ , this yields homotopy equivalent spaces after geometric realization. I am not managing to complete the proof provided in the paper. If somebody knows of a resource with (or is willing to write up) a more detailed proof, this would be very welcome and would (obviously) solve my problem. However, not wanting to ask too much, I do have a more specific question in mind, which I am hoping will be enough to help me. Why is the proof so unsimplicial? What I mean is that the homotopy equivalence does not arise at the simplicial level, i.e. $n$ simpli
|
I have after scouring the internet found an alternative proof which is much more "simplicial" and in my opinion more satisfying. It can be found as lemma 5.2 in these slides by Jardine. This proof in addition to being more in line with my personal taste lends itself better to potential generalizing and philosophizing, instead of feeling like a coincidence.
|
|proof-explanation|simplicial-stuff|
| 1
|
Sum of Independent exponential random variables 1.0
|
Help with Exponential Distribution Exercise: Struggling to Eliminate $-1$. Let $X_i \sim \text{Exp}(\theta_1)$ and $X_j \sim \text{Exp}(\theta_2)$ ; determine the distribution of $U = X_i + X_j$ . $\int_0^u \theta_1 e^{-\theta_1*k} \cdot \theta_2e^{-\theta_2u+\theta_2k} dk $ $\theta_1\theta_2e^{-\theta_{2}u}\int_0^u e^{\theta_2k-\theta_1k}dk $ after evaluating: $\theta_1\theta_2e^{-\theta_{2}u} \cdot \dfrac{1}{\theta_1\theta_2}(e^{(\theta_1-\theta_2)u}-{e^0}) $ $\theta_1\theta_2e^{-\theta_{2}u} \cdot \dfrac{1}{\theta_1\theta_2}(e^{(\theta_1-\theta_2)u}-{1}) $ The principal problem is that the given answer is: $\theta_1\theta_2e^{-\theta_{2}u} \cdot \dfrac{e^{(\theta_1-\theta_2)u}}{\theta_1\theta_2} $ I need help, please
|
If $\theta _2>\theta_1$ (say) then for $s$ small enough $$E(e^{s U})=\frac{\theta_1}{\theta_1-s}\frac{\theta_2}{\theta_2-s}=\frac{1}{\theta_2-\theta_1}\left(\theta_2\frac{\theta_1}{\theta_1-s}-\theta_1\frac{\theta_2}{\theta_2-s}\right)$$ $$=\frac{\theta_1\theta_2}{\theta_2-\theta_1}\int_{0}^{\infty}e^{su}\left(e^{-\theta_1 u}-e^{-\theta_2 u}\right)du$$ The density of $U$ is $$\frac{\theta_1\theta_2}{\theta_2-\theta_1}\times \left(e^{-\theta_1 u}-e^{-\theta_2 u}\right)$$
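A Monte Carlo sanity check of the closed-form density (a sketch with arbitrary rates; note that numpy's `exponential` takes a *scale*, i.e. $1/\theta$): the empirical CDF of $U$ is compared with the CDF obtained by integrating the density above.

```python
import numpy as np

rng = np.random.default_rng(0)
t1, t2 = 1.0, 2.5                                    # hypothetical rates theta_1 < theta_2
u = rng.exponential(1/t1, 10**6) + rng.exponential(1/t2, 10**6)

x = 1.3
cdf_formula = 1 - (t2*np.exp(-t1*x) - t1*np.exp(-t2*x)) / (t2 - t1)   # integral of the density
print((u <= x).mean(), cdf_formula)                  # both ≈ 0.572
```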
|
|integration|summation|random-variables|exponential-function|independence|
| 0
|
Is there any inequality involving the Frobenius norm and the dimension of matrix?
|
Let $A$ be an $m \times r$ matrix and $B$ be an $r \times n$ matrix. I wonder if there exists an inequality like the following: $$ \left \| AB \right \|_F \leq f(m,r,n)g(A,B) , $$ or $$ \left \| AB \right \|_F \ge f(m,r,n)g(A,B) , $$ where $f(m,r,n)$ is a function of the matrix dimensions $m,r,n$ , and $g(A,B)$ is a function of $A$ or $B$ . For example, does $\left \| AB \right \|_F \leq mr^2n \left \| A \right \|_F$ hold (here $f(m,r,n) = mr^2n$ is a function of $m$ , $r$ and $n$ , and $g(A,B) = \left \| A \right \|_F$ is a function of $A$ )? I looked through the matrix reference books, but could not find any satisfactory answer, only some less relevant ones like this link . Does anyone know the answer?
|
The Frobenius norm is sub-multiplicative; this follows from the Cauchy–Schwarz inequality (note that it is not an induced operator norm, even though it is sub-multiplicative). Meaning: $$\|AB\|_F \leq \|A\|_F\|B\|_F.$$ Is this what you are looking for? I think this wikipedia article contains good explanations for this.
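A quick numpy spot check of the sub-multiplicativity on random matrices (just a sanity sketch; random data, arbitrary shapes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 3))

lhs = np.linalg.norm(A @ B, 'fro')
rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
print(lhs <= rhs)   # True
```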
|
|matrix-norms|linear-matrix-inequality|
| 0
|
Find conditional probability that stock return will exceed some threshold value
|
Suppose we have some financial data, e.g., stock return time series. The theoretical distribution is unknown, while we can construct the empirical distribution through historical data. The problem is to compute the probability that the random process will exceed (i) some $x$ value, and (ii) some $x$ value conditional on the current return being equal to $y$ ( $y < x$ ). Intuitively, the first and second parts of the problem should yield different results. However, mathematically I get the same. Here is my reasoning. (i): we can compute $P(X>x)$ easily from the empirical CDF. (ii) mathematically the problem is to compute $P(X>x|X=y)$ . According to conditional probability, $P(X>x|X=y)=\frac{P(X>x \cap X=y)}{P(X=y)}$ . As we know that the current return $y$ is less than the threshold value $x$ (i.e., $y < x$ ), the events $X>x$ and $X=y$ are independent (because these events cannot take place simultaneously). Therefore, $P(X>x \cap X=y)=P(X>x)P(X=y)$ . As a result, $P(X>x|X=y)=\frac{P(X>x)P(X=y)}{P(X=y)}=P(X>x)$ . Ques
|
Your computations are correct but the conclusions are wrong. If two events cannot occur simultaneously they are not independent but mutually exclusive, i.e. their intersection is the null event. That event has zero probability of occurring. You can think of it like this: "I'm currently at 5. What is the probability of being at 7?" Since I'm at 5 I cannot be at 7 as well, hence the probability is $0$ .
|
|conditional-probability|finance|
| 1
|
Geometry : $(AD+DE)^2+BD^2=(AB+BE)^2$
|
Let $ABC$ be a right triangle where $\angle C=90^{\circ}$ and $\angle A=10^{\circ}$. Point D and E are on the sides AC and BC respectively such that $\angle ABD = \angle CDE = 60^{\circ}$. Prove that $(AD+DE)^2+BD^2=(AB+BE)^2$ My attempt : Draw line $BD$ and extend $BD$ through $D$ to meet the perpendicular from $A$ at $F$, $AF\perp BD$ Let $ED$ cut $AF$ at point $T$. $\angle CBF = \angle CAF = 20^{\circ} \rightarrow B, C, F, A$ concyclic. so $\angle BCA = \angle BFC = \angle BDE = 10^{\circ}$ so $DE \parallel CF$ and $ET \parallel CF$.
|
Take $F$ reflection of $D$ about $C$ , $\triangle DEF$ is equilateral, take then $G$ on $AB$ so that $FG\bot AC$ to get $FG=FB=BD$ ( $\widehat{FGB}=\widehat{FBG}=80^\circ$ ). Also $\widehat{BFE}=\widehat{BDE}=10^\circ$ . Now take $H$ , reflection of $E$ about $BF$ , see that $\triangle BGH$ is equilateral, so $BG=BH=BE$ , done, since $AG^2=AF^2+FG^2$ .
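For readers who want a quick numerical confirmation of the statement before working through the synthetic proof, here is a coordinate sketch (the placement $C=(0,0)$, $A=(1,0)$ is an arbitrary normalization; the angle bookkeeping $\angle DBC = 80^\circ - 60^\circ = 20^\circ$ is the only step that uses the hypotheses):

```python
import math

deg = math.radians
BC = math.tan(deg(10))                 # C=(0,0), A=(1,0), B=(0,BC): angle A = 10°, angle C = 90°
CD = BC * math.tan(deg(20))            # D on CA, since angle DBC = angle ABC - angle ABD = 20°
CE = CD * math.tan(deg(60))            # E on CB, since angle CDE = 60° in right triangle DCE

AD, DE = 1 - CD, CD / math.cos(deg(60))
BD = math.hypot(CD, BC)
AB, BE = 1 / math.cos(deg(10)), BC - CE

print((AD + DE)**2 + BD**2, (AB + BE)**2)   # both ≈ 1.1677
```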
|
|geometry|euclidean-geometry|triangles|
| 0
|
A gambler's ruin problem with winning size of 3
|
I play a game where I have a $25$ % chance of winning \$ $3$ and a $75$ % chance of losing \$ $1$ . Currently, I have \$ $5000$ . I will stop playing once I either earn \$ $20000$ or lose all of my \$ $5000$ . When I stop, what is the probability of having lost all of my \$ $5000$ ? The winning amount is not $1$ , so I think I cannot use the formula $$\frac{1-(q/p)^i}{1-(q/p)^N}$$ where $N$ is the winning amount of money, i.e. \$ $5000$ and $i$ is the money we start with. What should be changed in this formula to calculate the probability of bankruptcy?
|
Let $p_j$ be the probability of bankruptcy when starting with initial capital $j$ dollars. We have the recursion $$ p_j = \frac{3}{4}p_{j-1} + \frac{1}{4}p_{j+3} $$ if $1\leq j \leq M-3$ and the conditions $p_0 = 1$ and $p_M=0$ (denoting $M=20000$ ). Re-indexing ( $k=j+3$ ) and transforming the recurrence equation a bit we have $$ p_k = 4p_{k-3} - 3p_{k-4} $$ This is a linear homogenous recurrence with characteristic polynomial $z^4-4z+3$ , which has roots $1$ (a double root) and $-1\pm i \sqrt{2}$ . Let's denote $w = -1+i\sqrt{2} = \sqrt3 e^{i\theta}$ where $\theta = \pi - \arctan \sqrt2 \approx 2.186276$ . So the solution is given $$ p_k = c_1 + c_2k + c_3w^k+c_4 \bar w ^k $$ (Notice that in reality the recurrence doesn't hold anymore when $k>M$ and the solution will give nonsense after that but we don't care since we're only interested in the region $k\in \{0,\dots, M\}$ . Same thing also happens for the usual gambler's ruin) We just have to solve these coefficients. From $p_0=1$ we
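Before solving for the coefficients, a scaled-down Monte Carlo run is a useful cross-check of the setup (a sketch with made-up small numbers `start=5`, `target=20` instead of 5000 and 20000; the seed and trial count are arbitrary):

```python
import random
random.seed(0)

def ruin_probability(start=5, target=20, trials=200_000):
    ruined = 0
    for _ in range(trials):
        c = start
        while 0 < c < target:
            c += 3 if random.random() < 0.25 else -1   # win $3 w.p. 1/4, lose $1 w.p. 3/4
        ruined += (c <= 0)
    return ruined / trials

print(ruin_probability())   # an estimate to compare against the solved recursion (scaled down)
```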
|
|probability|stochastic-processes|
| 0
|
Simplification of $ \sum_{n=0}^{\infty} x^{n q} \prod_{k=1}^{t} \left(\sum _{m=0}^n \frac{1}{(x^{a_k})^m}\right) $
|
$$\sum_{n=0}^{\infty} x^{n q} \prod_{k=1}^{t} \left(\sum _{m=0}^n \frac{1}{(x^{a_k})^m}\right)$$ For $t=1$ : \begin{align*} \sum_{n=0}^{\infty} x^{n q} \left(\sum _{m=0}^n \frac{1}{x^{a_1 m}}\right) &= \sum_{n=0}^{\infty} x^{n q} \left( \frac{\frac{1}{x^{(n+1)a_1}}-1}{\frac{1}{x^{a_1}}-1}\right)\\ &= \frac{1}{ \left(\frac{1}{x^{a_1}}-1\right)} \sum_{n=0}^{\infty} x^{n q} \left( \frac{1}{x^{(n+1)a_1}}-1\right)\\ &= \frac{1}{(1-x^q)(1-x^{q-a_1})} \end{align*} For $t=2$ : \begin{align*} &\mathrel{\phantom=} \sum_{n=0}^{\infty} x^{n q} \left(\sum _{m=0}^n \frac{1}{x^{a_1 m}}\right) \left(\sum _{m=0}^n \frac{1}{x^{a_2 m}}\right )\\ &= \sum_{n=0}^{\infty} x^{n q} \left( \frac{\frac{1}{x^{(n+1)a_1}}-1}{\frac{1}{x^{a_1}}-1}\right)\left( \frac{\frac{1}{x^{(n+1)a_2}}-1}{\frac{1}{x^{a_2}}-1}\right)\\ &= \frac{1}{ \left(\frac{1}{x^{a_1}}-1\right) \left(\frac{1}{x^{a_2}}-1\right)} \sum_{n=0}^{\infty} x^{n q} \left( \frac{1}{x^{(n+1)a_1}}-1\right)\left( \frac{1}{x^{(n+1)a_2}}-1\right)\\ &= \frac{1}{
|
We have $$\begin{align}&\sum_{n=0}^{\infty} x^{n q} \prod_{k=1}^{t} \left(\sum _{m=0}^n \frac{1}{(x^{a_k})^m}\right) \\\\&=\sum_{n=0}^{\infty} x^{n q} \prod_{k=1}^{t} \frac{\frac{1}{x^{(n+1)a_k}}-1}{\frac{1}{x^{a_k}}-1} \\\\&=\prod_{k=1}^{t}\frac{1}{\frac{1}{x^{a_k}}-1}\sum_{n=0}^{\infty} x^{n q} \prod_{k=1}^{t}\bigg(\frac{1}{x^{(n+1)a_k}}-1\bigg)\tag1\end{align}$$ Here, letting $y:=x^{-n-1}$ , we have $$\begin{align}&\prod_{k=1}^{t}\bigg(\frac{1}{x^{(n+1)a_k}}-1\bigg) \\\\&=\prod_{k=1}^{t}(y^{a_k}-1) \\\\&=(y^{a_1}-1)(y^{a_2}-1)\cdots (y^{a_t}-1) \\\\&=\sum_{m=0}^t(-1)^{m}\sum_{j=1}^{\binom tm}y^{\sigma_{t-m}(j)}\end{align}$$ where $\sigma_n$ represents the sum of $n$ $a_j$ s out of $a_1,a_2,\cdots,a_t$ . So, we have $$\begin{align}(1)&=\prod_{k=1}^{t}\frac{1}{\frac{1}{x^{a_k}}-1}\sum_{n=0}^{\infty} x^{n q} \sum_{m=0}^t(-1)^{m}\sum_{j=1}^{\binom tm}y^{\sigma_{t-m}(j)} \\\\&=\prod_{k=1}^{t}\frac{1}{\frac{1}{x^{a_k}}-1}\sum_{m=0}^t(-1)^{m}\sum_{j=1}^{\binom tm}\sum_{n=0}^{\infty} x^{n q
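The $t=1$ case is easy to sanity-check numerically before trusting the general formula; a short sketch (the sample values of $x$, $q$, $a_1$ are arbitrary, chosen so that $|x^q|<1$ and $|x^{q-a_1}|<1$ and the series converges):

```python
x, q, a1 = 0.7, 3.0, 1.0

partial = sum(x**(n*q) * sum(x**(-a1*m) for m in range(n + 1)) for n in range(400))
closed = 1 / ((1 - x**q) * (1 - x**(q - a1)))
print(partial, closed)   # both ≈ 2.9845
```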
|
|sequences-and-series|algebra-precalculus|
| 0
|
Why does the expression under the root have to be positive?
|
I have a function defined like this: $f(x,y) = \sqrt[127.5]{\frac{x^2+y^2-4y}{4x-x^2-y^2}}$ . I thought that the domain is $4x-x^2-y^2 \neq 0$ , but when I looked on Wolfram, the domain is that everything under the root has to be $\gt0$ . Why is my result wrong? I thought of the number $127.5$ as giving the exponent $\frac{10}{1275}$ , and that would be translated into $\frac{2}{255}$ . From my point of view, that should be equal to $\sqrt[255]{\left(\frac{x^2+y^2-4y}{4x-x^2-y^2}\right)^2}$ . If I am not wrong, $255$ is not even. Even if that number were even, the square should take care of a negative result in the expression. Did I take something for granted that I shouldn't, or did I forget something?
|
The "bit under the root" has a name, in case you want to search further - it is called a $radicand$ . The fundamental reason (from my undergraduate days, things may have changed) is that $y=\sqrt{x}$ is defined to be the positive solution to $x=y^{2}$ but this is by convention . Later, roots such as $8^{\frac{1}{3}}$ are sometimes defined (in complex space) to be principal roots which are not always real. So, ultimately it is a matter of convention, but conventions need to be followed.
|
|exponentiation|radicals|
| 0
|
How is the centralizer in $\mathrm{GL}_n(k)$ a torus
|
I have the following definition: Suppose that $g \in \mathrm{GL}_n(k)$ is regular and semisimple. Define $T_g := \mathrm{Cent}_{\mathrm{GL}_n(k)}(g)$ to be the centralizer of $g$ in $\mathrm{GL}_n(k)$ . My advisor and I managed to prove that we have $T_g \cong k[g]^\times$ as multiplicative subgroups of $\mathrm{GL}_n(k)$ . Now my advisor claims that $T_g$ is in fact an algebraic torus , i.e. a finite product of copies of $k^\times$ , but how so? The multiplication in a product $(k^\times)^r$ is defined entrywise, but the action of $g$ on $k[g]^\times$ is very weird, especially $g \cdot g^{n-1}$ .
|
To get it off the unanswered list: the claim that $T_g \cong (k^\times)^r$ is generally wrong. As @Alex Youcis mentioned, one has to pass to an algebraic closure $\bar k$ of $k$ , which is also required in the definition of an algebraic torus. So assume that $T_g \cong k[g]^\times$ . Since $g$ is regular semisimple, all its eigenvalues in $\bar{k}$ are distinct, and so $\mathrm{char}(g;X) \in k[X]$ is already its minimal polynomial. This gives $k[g] \cong k[X] / \mathrm{char}(g;X)$ . Moreover, $\mathrm{char}(g;X)$ does not have multiple roots in $\bar k$ , whence it has no multiple irreducible factors over $k$ . Write $\mathrm{char}(g;X) = f_1(X) \dotsm f_r(X)$ where $f_1, \dots, f_r$ are pairwise distinct and irreducible. Then by the Chinese remainder theorem, $k[g] \cong \prod_{i=1}^r k[X] / f_i(X)$ where $k[X] / f_i(X)$ are fields that are finite over $k$ . This is all we can say over $k$ . If we pass to $\bar k$ , i.e. if we tensor with $\bar k$ , then $\bar k[X] / f_i(X)$ are field
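A tiny sympy illustration of the Chinese-remainder decomposition used here (the matrix is a made-up example over $k=\mathbb{Q}$, not from the question): a regular semisimple $g$ whose characteristic polynomial factors as $(X-2)(X^2+1)$, so $k[g]\cong \mathbb{Q}\times\mathbb{Q}(i)$ and $k[g]^\times$ is not a split torus over $\mathbb{Q}$.

```python
from sympy import Matrix, symbols, factor_list

X = symbols('X')
g = Matrix([[0, 0, 2],
            [1, 0, -1],
            [0, 1, 2]])              # companion matrix of X^3 - 2*X^2 + X - 2

p = g.charpoly(X).as_expr()
print(p)                             # X**3 - 2*X**2 + X - 2
print(factor_list(p))                # factors (X - 2)*(X**2 + 1): distinct eigenvalues 2, ±i
```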
|
|algebraic-groups|
| 1
|