Problems with task 220

TobiSonne     2020-11-04 19:52:09

Hello,

I've been trying to solve task 220 for many hours, and I always get an error like this one:

For input (-0.354678854764 0.364928694202 0.755177848283 0.597229339371) your result 0.167631289817 is too far from expected -0.710182743619.

Sometimes I can't find the expected number in the test data at all (here -0.710182743619), and sometimes the value is not in the last column. Is this intended?

When I test my solution in R, everything is fine: I get a really small mean squared difference, and my biggest absolute deviation between Yreal and Yexp is well below 0.04.

Maybe I have some issues understanding the purpose of S.

What is meant by 'these values' in 'multiplying these values by 0.5 for example', and what is meant by 'it' in 'Of course when NN yields result it should be converted back'?

I think it's very difficult to understand.

Or does it have anything to do with the sigmoid? I use 2 / (1 + exp(-Xsum * 2)) - 1, like in task 219.
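
Written as an R function, that is:

Fa <- function(Xsum) 2 / (1 + exp(-Xsum * 2)) - 1   # algebraically the same as tanh(Xsum)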

Thank you very much!

Best regards,

Tobi

Alexandr Milovantsev     2020-11-05 02:46:28

I suppose you misunderstood the usage of the S parameter, as I did at first.

When calculating the neural network's response, you should divide by S to scale the response, not multiply. Once I noticed that, I passed this task easily.

TobiSonne     2020-11-05 09:44:54

Thank you so far!

But which numbers should I divide by S, and at which point?

Suppose the first line of my test data is 0.1 0.2 0.3 0.4 0.5.

Should I divide 0.1 to 0.4 by S?

Or suppose that, after the hidden layer, my results on my original data range from -0.7 to 0.7.

Should I divide them by S (i.e., by 0.7) to get a range from -1 to 1?

And should I do this after the output layer, too?

I really don't understand it: nearly everything I try leads to excellent results in R, but sometimes to really big deviations in the error message here.

Alexandr Milovantsev     2020-11-05 15:14:30

First of all: there is no hidden layer.

The task formulation supposes that you multiply every Yreal part of the input training data set by S, and then train the NN against this scaled data set.

The purpose of this scaling is to make the target values swing not in [-1, 1] but in [-0.5, 0.5] (if S = 0.5, for example), so that the output range fits better within the limited output capability of the output neuron's sigmoid function.

Please notice that you don't touch the x[i] part of the input data set with this scale factor S.

For me it was easier to imagine that I divide the response of the output neuron by S.

TobiSonne     2020-11-06 09:24:13

Thank you! I've tried it, and it works really well. In R, at least ...
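
Roughly, what I do now is this (trainNN and predictNN stand in for my own fitting and prediction code):

S <- 0.5                                # the scale factor from the task
Ytrain <- S * dataset[, 5]              # multiply only the Yreal column by S
net <- trainNN(dataset[, 1:4], Ytrain)  # the x[i] columns stay untouched
Yhat <- predictNN(net, X) / S           # convert the NN's response back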

But on the site I get the error message: For input (0.963349954204 0.540292595507 0.985639513169 0.240575372757) your result -0.960083128342 is too far from expected 0.787907142212.

The expected number should be in the LAST column, shouldn't it? But it's in the third.

This is the line of the test data where it is from:

0.504420042158 0.97061926781 0.787907142212 0.808296575763 0.186444601068

Could it be that there is a bug?

Alexandr Milovantsev     2020-11-06 11:37:39

Since I don't know R, I can't tell whether there is a bug in your code or not. Also, the checker for this task is much simpler than the solver, so it is very unlikely that the bug is in the checker.

The only way I can help is to solve the same data set as you and compare my NN's response on the failing input that the checker blames.

That is the plan:

  1. You take a training data set, solve it, send the solution back to the site, and write down the checker's blame string.
  2. You publish here the data set you used and the entire blame string.
  3. I try to solve your data set and build my NN.
  4. I feed the X[] part from your blame string into my NN and publish my response.
  5. ...
  6. Profit! :)

TobiSonne     2020-11-15 12:56:14

Thank you very much! :) And sorry for my late answer.

This is my answer: 4 0.5 0.124217512871159 0.846418021849998 -0.481297943915312 0.0258415642713306 0.318843836265659 0.0250444359698714 -0.0989142431152183 0.24327809138823 -0.0941586032409721 -0.420096832890147 0.619665962446582 -0.200784598325896 -0.221676294351012 -0.394736216295904 0.227236561401508 -0.433740941670469 -0.455932542886725 -0.835545802711364 0.601139807578917 -0.491586721836972

This is my data (only the first lines; the rest is in the next post because I don't have enough space here):

0.0632982781382 -0.569157114194 -0.129765189076 -0.194686624845 -0.516281926505

-0.883270635395 0.180452023134 -0.921334457883 -0.258707651224 -0.473206499956

0.718228563081 -0.169593741903 0.987220624535 0.0624486954033 0.766251764391

0.282643432408 0.872951704521 -0.235557938815 0.867250924102 -0.535522195541

-0.0877454763675 -0.35784679215 -0.414806807575 -0.0812098352534 -0.799651832207

0.836514769115 -0.230601730489 0.833864577999 0.239834964603 0.386717042971

-0.317541553371 -0.741661654345 -0.0387495668828 -0.75518255635 -0.115082358293

0.810979673227 0.360364975169 0.475606416637 0.751593874791 0.0367799953375

0.657670145552 -0.315670507667 0.568206080343 0.172896499775 0.150887282216

0.586153583265 0.865216391657 0.132003919936 0.991158960049 -0.155487776764

-0.688842266651 -0.328576255317 -0.35784679215 -0.691026817365 -0.0112266725983

0.995090537427 0.153799685648 0.846680713868 0.571002558315 0.239306244848

0.186471172995 0.182302754085 -0.0961086569751 0.391132101523 -0.135163420761

-0.960078120402 -0.13917054687 -0.848160721231 -0.516020258648 -0.237505700052

0.219642461888 0.627493831058 -0.328576255317 0.698437412546 -0.773395446322

0.897229698468 -0.130548610948 0.97364279053 0.263738436556 0.535189509842

-0.0230343955533 -0.714588018822 -0.0637225025945 -0.43932791943 -0.49814223806

-0.0259541951863 0.568206080343 0.109100742136 0.293489038662 0.61491854231

0.650465124246 0.0041854367383 0.671098633938 0.207345879047 0.160401662607

-0.163464119 -0.115890401056 -0.466980257919 0.0630640071676 -0.531219437279

-0.162050660268 0.846680713868 -0.261488221612 0.46173281698 0.0374529420466

0.222690853498 -0.848160721231 0.399655930592 -0.471012571258 0.153534214869

-0.119517208614 0.399655930592 -0.169593741903 0.277517694439 0.25360605324

0.318529546845 -0.147912943699 0.440523748965 -0.0831517573087 0.0827840049138

-0.54977315263 -0.129765189076 -0.230601730489 -0.407367554856 0.474978997244

-0.0369757921773 -0.529740169069 0.0911239443478 -0.454133087607 -0.225011760048

0.957647482519 0.377362808105 0.635785314932 0.78927386066 0.0477203708788

0.0860770224172 -0.400562455567 -0.13917054687 -0.13103615503 -0.703989402442

0.177405523868 -0.133705190922 -0.102451031598 0.189730212983 -0.306628887724

-0.623816741101 -0.842019775969 -0.133705190922 -0.997977827124 0.260229327366

0.133070211866 0.289581120465 -0.377569477954 0.440066461318 -0.906995304144

0.0589340363955 0.833864577999 0.0041854367383 0.563204236565 0.357573860682

0.00412358334313 -0.129927941822 0.261603275193 -0.269389540765 0.199026693548

0.431105232842 0.812786177648 0.0846974933534 0.867194143715 0.0362220993767

-0.876784783943 0.0846349927372 -0.988923768539 -0.252691435368 -0.571545699485

-0.534877964307 0.342825671558 -0.400562455567 -0.132069079098 -0.00970450907922

0.338350013187 -0.757273352914 0.645641591611 -0.429507931478 0.509236446905

0.785825738045 0.526692415731 0.342825671558 0.828522030872 -0.326778858653

-0.493098357586 0.0673728740881 -0.563698756928 -0.0673489079317 -0.121737153277

-0.659404202628 0.0911239443478 -0.948105316038 -0.0639284452668 -0.772953882826

This is the error message: For input (0.0532936433319 -0.0868870306759 -0.357754765261 0.13965098377) your result -0.376174304131 is too far from expected -0.90763608556

TobiSonne     2020-11-15 12:56:31

The rest of my data:

-0.0905357439002 -0.102451031598 0.377362808105 -0.291882375258 0.924925143091

0.202836736437 -0.0961086569751 0.671157131844 -0.186850388375 0.997985589244

-0.784197197944 -0.0637225025945 -0.757273352914 -0.331526585449 -0.194985711682

0.418626149769 -0.563698756928 0.812786177648 -0.328339444025 0.784296926216

0.888823446617 0.109100742136 0.817526487863 0.435509989533 0.221802689624

-0.309196796252 0.200674171066 -0.130548610948 -0.036861583521 0.490216144541

0.184558308282 0.645641591611 -0.0206225526987 0.615569955268 0.178602308707

-0.812533932128 -0.61293050216 -0.355177714141 -0.910340051116 0.25935013597

-0.759927984305 0.277007580977 -0.714588018822 -0.210073883542 -0.278092846922

0.808216366703 0.671157131844 0.350115124395 0.956905713695 -0.190194443449

-0.25154294681 0.350115124395 -0.0290009028257 -0.0545769503417 0.292089740315

-0.296347160293 -0.948105316038 0.182302754085 -0.906538786904 0.390990902789

-0.00422357247067 0.132003919936 0.289581120465 -0.123243410822 0.423265280877

-0.730553589254 -0.414806807575 -0.315670507667 -0.714353923921 0.388057376756

0.294388703025 0.987220624535 -0.129927941822 0.911886267774 -0.237861715636

0.142430024064 0.219817659709 0.526692415731 0.0019764469195 0.900184016177

-0.154506314514 0.218387237142 0.153799685648 -0.0417993086498 0.762755093332

-0.0807486716712 -0.921334457883 0.0673728740881 -0.660871605433 -0.19829268983

0.0183476079902 0.475606416637 0.280907461099 0.146103551566 0.66987575183

-0.282863152396 0.817526487863 -0.529740169069 0.459440545386 -0.338209880173

-0.901701551284 -0.377569477954 -0.569157114194 -0.785284067767 -0.0488384991683

0.434109244582 -0.307501685304 0.865216391657 -0.199571215889 0.937020811134

0.434145195071 -0.0329029700581 0.180452023134 0.226548844992 -0.493031645417

0.0884706995666 0.97364279053 -0.147912943699 0.7293497125 0.0263382601021

-0.411222159592 0.170746997678 -0.842019775969 0.13554367732 -0.960097454868

-0.133866301545 0.233632318994 -0.307501685304 0.238135994526 -0.0472019622849

-0.521493239554 -0.854942389926 -0.115890401056 -0.946956460892 0.0551960546817

0.539995586686 0.285675434391 0.219817659709 0.607484808773 -0.0271677413429

0.571392462558 -0.0206225526987 0.872951704521 0.0521167686967 0.749826951222

-0.456684256102 -0.0387495668828 -0.764651313829 -0.0265274560037 -0.699306365366

0.24911815589 -0.674089845065 0.200674171066 -0.240276559033 -0.16381251891

-0.976941822741 -0.357754765261 -0.674089845065 -0.761988596556 -0.0346764592082

0.650341721747 0.280907461099 0.277007580977 0.572132194262 -0.438813583407

0.397406591805 -0.355177714141 0.218387237142 0.0914148236112 -0.139987209568

-0.153825759644 0.635785314932 -0.0329029700581 0.2260343276 0.345247306363

TobiSonne     2020-11-15 12:57:47

And these are my results. The first column is my best solution, the second is the real data (the fifth column multiplied by S, here S = 0.5).

[1,] -0.266135363 -0.258140963

[2,] -0.232954466 -0.236603250

[3,] 0.374567974 0.383125882

[4,] -0.255065916 -0.267761098

[5,] -0.400822072 -0.399825916

[6,] 0.189874201 0.193358521

[7,] -0.056323805 -0.057541179

[8,] 0.013404145 0.018389998

[9,] 0.071009490 0.075443641

[10,] -0.074269386 -0.077743888

[11,] 0.001151521 -0.005613336

[12,] 0.123429066 0.119653122

[13,] -0.079045487 -0.067581710

[14,] -0.125763209 -0.118752850

[15,] -0.371157147 -0.386697723

[16,] 0.264198646 0.267594755

[17,] -0.250405098 -0.249071119

[18,] 0.304862199 0.307459271

[19,] 0.094751064 0.080200831

[20,] -0.276723835 -0.265609719

[21,] 0.027169174 0.018726471

[22,] 0.068214543 0.076767107

[23,] 0.117377404 0.126803027

[24,] 0.055543045 0.041392002

[25,] 0.222252895 0.237489499

[26,] -0.106239415 -0.112505880

[27,] 0.024059675 0.023860185

[28,] -0.345366552 -0.351994701

[29,] -0.165602067 -0.153314444

[30,] 0.124875325 0.130114664

[31,] -0.437463881 -0.453497652

[32,] 0.183501801 0.178786930

[33,] 0.112307877 0.099513347

[34,] 0.018628113 0.018111050

[35,] -0.283574046 -0.285772850

[36,] 0.001360233 -0.004852255

[37,] 0.243779235 0.254618223

[38,] -0.153854956 -0.163389429

[39,] -0.075825354 -0.060868577

[40,] -0.380717909 -0.386476941

[41,] 0.447967347 0.462462572

[42,] 0.484527093 0.498992795

[43,] -0.110373816 -0.097492856

[44,] 0.376685369 0.392148463

[45,] 0.121113265 0.110901345

[46,] 0.232612269 0.245108072

[47,] 0.085839515 0.089301154

[48,] 0.123443706 0.129675068

[49,] -0.135476840 -0.139046423

[50,] -0.092030596 -0.095097222

[51,] 0.156567190 0.146044870

[52,] 0.184265294 0.195495451

[53,] 0.224217526 0.211632640

[54,] 0.181566965 0.194028688

[55,] -0.109078070 -0.118930858

[56,] 0.445659088 0.450092008

[57,] 0.371149243 0.381377547

[58,] -0.106454077 -0.099146345

[59,] 0.340309371 0.334937876

[60,] -0.159859381 -0.169104940

[61,] -0.021696189 -0.024419250

[62,] 0.451724074 0.468510406

[63,] -0.231981074 -0.246515823

[64,] 0.022320678 0.013169130

[65,] -0.465512266 -0.480048727

[66,] -0.036085771 -0.023600981

[67,] 0.025618131 0.027598027

[68,] -0.021916063 -0.013583871

[69,] 0.373392363 0.374913476

[70,] -0.353008900 -0.349653183

[71,] -0.089947373 -0.081906259

[72,] -0.021081026 -0.017338230

[73,] -0.204914269 -0.219406792

[74,] -0.078510320 -0.069993605

[75,] 0.181347608 0.172623653

Alexandr Milovantsev     2020-11-18 13:52:49

https://yadi.sk/d/3tBM71y80jNXhQ

Here I posted an archive containing files with my checking results. The file nets.txt contains the trained neural networks that are run on the data set: the first is Tobi's, the second is mine. Result0.txt contains the calculation for Tobi's NN, Result1.txt for mine. 1.txt is the input data set. As you can see on the last line of result0.txt, my reconstruction of the NN from the bare string corresponds to the checker's one. And the results for the data set inputs differ from the calculations Tobi posted here. So I think that TobiSonne just forms the string describing the trained NN incorrectly.

TobiSonne     2020-11-22 15:07:57

Thank you so much!

Could you help me one last time, please?

Unfortunately, I still can't see where my error is. I've tried so much; I'm really frustrated, and that doesn't happen to me very often :(

This is your answer: 5 0.339051 0.706994 0.193776 0.929425 -0.545379 -0.279638 -0.239434 -1.07013 0.405265 -0.489543 0.62156 -1.01942 0.748245 0.574784 -0.580121 0.973039 -0.82167 -0.670637 -0.228163 0.535137 0.503478 0.0970107 0.129658 0.168595 0.174772 2.43649

So K = 5 and S = 0.339051?

I'm working with matrices and matrix multiplication in my code.

I calculate this:

Y.exp <- Fa((Fa(Input %*% Weights.Inner)) %*% Weights.Output)/S

%*% is the matrix product in R.

So these would be your inner weights (called Weights.Inner in my code)?

0.706994 -0.279638 -0.489543 0.574784 -0.670637

0.193776 -0.239434 0.621560 -0.580121 -0.228163

0.929425 -1.070130 -1.019420 0.973039 0.535137

-0.545379 0.405265 0.748245 -0.821670 0.503478

And these would be your output weights (called Weights.Output in my code)?

0.0970107

0.1296580

0.1685950

0.1747720

2.4364900
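
For reference, this is how I read those matrices out of your answer string in R, assuming my ordering above is right (ans holds your whole answer string):

nums <- as.numeric(strsplit(ans, " ")[[1]])
K <- nums[1]; S <- nums[2]
Weights.Inner <- matrix(nums[3:(2 + 4*K)], nrow = 4)           # column j = the four input weights of neuron j
Weights.Output <- matrix(nums[(3 + 4*K):(2 + 5*K)], ncol = 1)  # the K output weights
Y.exp <- Fa(Fa(Input %*% Weights.Inner) %*% Weights.Output) / S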

My inputs are the first four columns of the test data (not multiplied by anything).

These are the first ten cases; the first column is the result of Y.exp, and the second is 'original*scale' as in result1.txt.

-0.51611944 -0.17504590

-0.47256390 -0.16044114

0.76675775 0.25979843

-0.54135574 -0.18156934

-0.79934959 -0.27112275

0.38532724 0.13111680

-0.11974641 -0.03901879

0.04075319 0.01247029

0.15125718 0.05115848

-0.15598478 -0.05271829

Alexandr Milovantsev     2020-11-25 14:30:38

Your approach using matrices looks like it should do the calculation the right way, but I cannot figure out why it didn't. So here I post files with a detailed explanation of the calculation; you can just follow it with a pen and a handheld calculator. I hope these files will be useful for you.

https://yadi.sk/d/N7WkdVNwWedN_A

Alexandr Milovantsev     2020-11-25 14:49:44

I think it's also worth printing out the results of Input %*% Weights.Inner and Fa(Input %*% Weights.Inner), and then comparing them to the last Fa(x) = y column of the explanation files.

If there is a difference in the x part, the problem is in the matrices; maybe a wrong transposition.

If there is a difference in the y part, the problem is in the Fa function itself.
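
I don't know R, as I said, but I suppose the check would look something like this (using your variable names):

hidden.x <- Input %*% Weights.Inner   # compare this with the x column of the explanation files
hidden.y <- Fa(hidden.x)              # compare this with the Fa(x) = y column
print(head(hidden.x))
print(head(hidden.y))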

TobiSonne     2020-11-25 16:10:44

Thank you SO, SO much!

I had two stupid mistakes: I really didn't form the string correctly (I had the weights in the wrong order), and I compared the back-transformed results with the transformed outputs.
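
In case someone else stumbles over the same thing: the string has to list K, then S, then each neuron's four input weights in turn, then the K output weights. Since R flattens a matrix column by column (i.e. neuron by neuron here), forming it from my matrices is roughly:

ans <- paste(K, S,
             paste(as.vector(Weights.Inner), collapse = " "),
             paste(as.vector(Weights.Output), collapse = " "))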

I'm so happy now :))

Alexandr Milovantsev     2020-11-26 03:56:11

Glad to hear that my efforts to help were not in vain! Good luck solving other tasks!
