

When we solve a differential equation using software, the solutions are converged locally, but in our case of weather-related modeling we might need a global solution, especially if the data is noisy:

ANN for solving ordinary and partial differential equations

Solving differential equations using neural networks

Moreover, most differential equation solvers are not parallelized, but we could parallelize the NN part of the iterations and thus solve larger multivariate differential equations.
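To make the idea concrete, here is a minimal sketch of the neural-network collocation approach (in the Lagaris style) for the toy problem $y' = -y$, $y(0) = 1$. This is Python with SciPy standing in for the Mathematica tooling discussed here; the network size, optimizer, and restart count are arbitrary illustrative choices, not anyone's actual setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 25)           # collocation points
H = 6                                   # hidden units (arbitrary small choice)

def net(p, t):
    # Single hidden layer: N(t) = sum_j w2_j * tanh(w1_j * t + b1_j)
    w1, b1, w2 = p[:H], p[H:2*H], p[2*H:]
    return np.tanh(np.outer(t, w1) + b1) @ w2

def dnet(p, t):
    # Analytic dN/dt using d tanh(u)/du = 1 - tanh(u)^2
    w1, b1, w2 = p[:H], p[H:2*H], p[2*H:]
    h = np.tanh(np.outer(t, w1) + b1)
    return ((1.0 - h ** 2) * w1) @ w2

def loss(p):
    # Trial solution y(t) = 1 + t*N(t) satisfies y(0) = 1 by construction;
    # minimize the ODE residual of y' = -y over all collocation points at once.
    N, dN = net(p, t), dnet(p, t)
    y = 1.0 + t * N
    dy = N + t * dN
    return np.sum((dy + y) ** 2)

# A few random restarts, since the residual loss is non-convex.
best = min((minimize(loss, rng.normal(scale=0.5, size=3 * H), method="L-BFGS-B")
            for _ in range(5)), key=lambda s: s.fun)
y_fit = 1.0 + t * net(best.x, t)        # compare against the exact solution exp(-t)
```

The point about globality is visible in the loss: the residual is minimized over the whole interval simultaneously rather than stepped forward point by point, and the residual evaluations across collocation points are exactly the part that parallelizes naturally.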

Dara

## Comments

Here is a question I have on gradient-style optimization.

Given that we may want to fit to a factor that looks like cos(A*t+B), how do we most efficiently change A and B at the same time? The problem is that a large change in A will impact the phase shift B, and that leads to slow convergence.

Is it better, for example, to optimize against something like this: cos(A*(t+B/A))?

So when A is changed, the phase offset is adjusted automatically.

Does that make sense?
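Note that the two forms describe the same curve, since cos(A*(t+B/A)) = cos(A*t+B) identically; the question is only which parameterization the optimizer handles better. A quick sanity check that both parameterizations reach the same fit (a Python/SciPy sketch; the data and starting guesses are made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 200)
y = np.cos(1.3 * t + 0.8)                    # synthetic data: "true" A=1.3, B=0.8

r1 = lambda p: np.cos(p[0] * t + p[1]) - y   # parameterization 1: cos(A t + B)
r2 = lambda p: np.cos(p[0] * (t + p[1])) - y # parameterization 2: cos(A (t + tau)), tau = B/A

f1 = least_squares(r1, x0=[1.25, 0.6])       # start near (not at) the truth
f2 = least_squares(r2, x0=[1.25, 0.6 / 1.25])
```

Both runs converge here; whether the second form genuinely converges faster in general will depend on the data and the optimizer, so this only verifies that the two parameterizations agree on the minimum.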

Paul, you make sense, but I need to code a specific example to answer your questions, as opposed to the usual pontification heh heh heh.

Let us work some examples tonight, when I am a lot freer.

Dara

Paul, I tell you what: I will turn our discussions and code into Enterprise CDF for educational purposes, so others will learn the math, the numerical methods, and the symbolic methods.

D

The solvers in Mathematica, or any other solvers, are sensitive to the simplification and factoring of the terms in the expressions. The results might vary even though the expressions are equivalent!

Especially with the example you gave, for very small or large values of A.

Also, some of these equations might have multiple solutions, of which Mathematica numerically issues only one. There is no telling whether Mathematica finds all possible numerical solutions.

This was a puzzle when I wrote another global maximizer and compared it to Mathematica. To my surprise, and I did not see this in any papers, the solutions to the SVR global min/max are not unique! In other words, recalling what John asked, minimizing an error might not have a unique solution most of the time.

This is actually very good, since it means our model of the dynamical system is incomplete; therefore there are multiple solutions to the system of equations.
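A tiny illustration of the non-uniqueness, in the spirit of the SVR remark: for a sinusoid fit, two different parameter pairs trace the exact same curve and therefore give identical error. (Python, made-up data, just to show the mechanism.)

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100)
y = np.cos(t)                              # some target data

def err(A, B):
    # Sum-of-squares error of the model A*cos(t + B) against y.
    return np.sum((A * np.cos(t + B) - y) ** 2)

# Since cos(x + pi) = -cos(x), the pairs (A, B) and (-A, B + pi)
# describe the same curve, so any error functional treats them identically.
e1 = err(1.0, 0.3)
e2 = err(-1.0, 0.3 + np.pi)
```

So an "error minimizer" over (A, B) already has at least two global minimizers here, and a numerical solver will report only one of them.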

Imagine riding a bike: how your brain balances the bike and how mine would are totally different, in spite of the fact that we both ride the bike quite similarly.

But if the model of the dynamical system of the bike were complete, then either I could ride the bike or you could, but not both of us.

Dara

I noticed another issue with the Mathematica differential evolution solver: it prefers to work with range constraints, but it may not take the right shortcuts.

As an example, a good constraint for a phase constant is $[0, 2\pi]$.

But the solver seems to hit the range constraint and not know enough to wrap around, so many times the result is stuck at either $0$ or $2\pi$, which we know can't be right.
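One workaround is to let the phase parameter roam over more than one period, so the box constraint is never a hard wall in phase space, and then wrap the reported answer back into $[0, 2\pi)$. A sketch using SciPy's differential evolution as a stand-in for the Mathematica solver (the data here are made up):

```python
import numpy as np
from scipy.optimize import differential_evolution

TWO_PI = 2.0 * np.pi
t = np.linspace(0.0, 10.0, 200)
y = np.cos(t + 0.1)                      # true phase 0.1, close to the 0/2*pi boundary

def sse(p):
    # Sum-of-squares error as a function of the phase alone.
    return np.sum((np.cos(t + p[0]) - y) ** 2)

# Bounds span several periods, so the population can cross what would
# otherwise be the 0 or 2*pi wall; the objective is periodic anyway.
fit = differential_evolution(sse, bounds=[(-TWO_PI, 3.0 * TWO_PI)], seed=2)
phase = fit.x[0] % TWO_PI                # wrap the answer back into [0, 2*pi)
```

Whichever of the equivalent minima the population lands on, the modulo reduction recovers the same wrapped phase.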

That forces me either to put in several cycles or to go with a formulation such as $A\sin(kt) + B\cos(kt)$, which carries an implicit phase -- but this is not always good because it requires a lower constraint of 0 for both A and B.
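One thing worth noting about the $A\sin(kt) + B\cos(kt)$ form: once $k$ is fixed, it is linear in A and B, so those two can be solved by plain least squares with no bounds or evolutionary search at all, and the phase falls out of `atan2` already wrapped. A Python sketch with made-up data:

```python
import numpy as np

k = 1.3
t = np.linspace(0.0, 10.0, 200)
y = np.cos(k * t + 0.8)                         # data with true phase 0.8, amplitude 1

# Design matrix for the linear model y ~ A*sin(kt) + B*cos(kt)
M = np.column_stack([np.sin(k * t), np.cos(k * t)])
(A, B), *_ = np.linalg.lstsq(M, y, rcond=None)

# cos(kt + phi) = -sin(phi)*sin(kt) + cos(phi)*cos(kt), so A = -sin(phi), B = cos(phi)
amplitude = np.hypot(A, B)
phase = np.arctan2(-A, B) % (2.0 * np.pi)
```

This only helps for the amplitude/phase part, of course; the frequency $k$ is still nonlinear and would stay in the outer (evolutionary or gradient) search.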