Bought myself an Anycubic i3 Mega-S. Not sure why, probably ramping up to a midlife crisis or something.
Downloaded the Linux version of Cura 4.2.1 from Ultimaker.
Printed a few things from Thingiverse. Learned enough FreeCAD to design simple brackets, screws and clamps.
Spent the last few days trying to make Cura see the printer from Linux. It turns out the solution was very simple, at least on Linux Mint 19.1: all I had to do was add my user to the "dialout" group, then restart the X server and log back in to MATE.
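The group change itself is one command; a sketch, assuming a Mint/Debian-family system where usermod is available (the new membership only takes effect after logging out and back in):

```shell
# add the current user to the dialout group so Cura can open the
# printer's serial device (/dev/ttyUSB* or /dev/ttyACM*)
sudo usermod -a -G dialout "$USER"

# check which groups the user belongs to
id -nG
```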
Monday, September 30, 2019
Wednesday, July 10, 2019
zug.tap updated with example about how to run dlang TAP tests with Perl's prove
What is "prove": prove on perldoc.perl.org
zug.tap is my implementation of a TAP producer in dlang zug.tap in the official Dub repo
contents of .proverc:
-e '/usr/bin/rdmd -I./source/ -I../../source/'
--ext '.d'
all options in the command line:
prove -e '/usr/bin/rdmd -I./source/ -I../../source/' --ext '.d' -v
emilper@home ~/work/zug_project_dlang/zug-tap/examples/run_with_Perl5_prove $ prove
t/t0001_tests_pass.d ................. ok
t/t0002_tests_fail.d ................. Failed 5/5 subtests
t/t0003_tests_some_fail_some_pass.d .. Failed 2/5 subtests
Test Summary Report
-------------------
t/t0002_tests_fail.d (Wstat: 0 Tests: 5 Failed: 5)
Failed tests: 1-5
t/t0003_tests_some_fail_some_pass.d (Wstat: 0 Tests: 5 Failed: 2)
Failed tests: 4-5
Files=3, Tests=15, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.02 cusr 0.00 csys = 0.04 CPU)
Result: FAIL
emilper@home ~/work/zug_project_dlang/zug-tap/examples/run_with_Perl5_prove $ prove -v
t/t0001_tests_pass.d .................
1..5
ok 1 should pass 1
ok 2 should pass 2
ok 3 should pass 3
ok 4 should pass 4
ok 5 should pass 5
ok
t/t0002_tests_fail.d .................
1..5
not ok 1 should fail 1
not ok 2 should fail 2
not ok 3 should fail 3
not ok 4 should fail 4
not ok 5 should fail 5
Failed 5/5 subtests
t/t0003_tests_some_fail_some_pass.d ..
1..5
ok 1 should pass 1
ok 2 should pass 2
ok 3 should pass 3
not ok 4 should fail 4
not ok 5 should fail 5
Failed 2/5 subtests
Test Summary Report
-------------------
t/t0002_tests_fail.d (Wstat: 0 Tests: 5 Failed: 5)
Failed tests: 1-5
t/t0003_tests_some_fail_some_pass.d (Wstat: 0 Tests: 5 Failed: 2)
Failed tests: 4-5
Files=3, Tests=15, 0 wallclock secs ( 0.02 usr 0.01 sys + 0.01 cusr 0.01 csys = 0.05 CPU)
Result: FAIL
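All prove needs from a test program is TAP on stdout: a plan line, then ok/not ok lines. A minimal producer can be sketched in plain shell, with no library at all (the test names here are made up for illustration):

```shell
# emit a TAP plan followed by two results; prove parses this exactly
# like the output of the D test files above
echo "1..2"
if [ $((2 + 2)) -eq 4 ]; then
    echo "ok 1 arithmetic works"
else
    echo "not ok 1 arithmetic works"
fi
echo "ok 2 always passes"
```

Saving that as an executable file under t/ and running `prove --exec /bin/sh` would pick it up just like the D tests.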
Friday, April 19, 2019
Monday, March 11, 2019
remote backup script via ssh, new version
#!/bin/bash -x
SITE=$1
if [ -z "$SITE" ]
then
    echo "no site supplied"
    exit
fi
echo SITE IS $SITE
ACCOUNT=root@$SITE
COMMAND="/bin/tar czf - /home "
BACKUP_FOLDER=~/backup
DATE=`date +%Y-%m-%d_%H-%M-%S`
echo DATE IS $DATE
DESTINATION=$BACKUP_FOLDER/$SITE/home_$DATE.tar.gz
mkdir -p $BACKUP_FOLDER/$SITE/
/usr/bin/ssh $ACCOUNT $COMMAND > $DESTINATION
echo DONE
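The same idea can be exercised locally by swapping the ssh pipe for a local tar; a quoted sketch, where everything under /tmp is an example stand-in and not from the original script:

```shell
# local stand-in for the remote backup: same layout, no ssh involved
SITE=example.com
BACKUP_FOLDER=/tmp/backup_demo
DATE=$(date +%Y-%m-%d_%H-%M-%S)
DESTINATION="$BACKUP_FOLDER/$SITE/home_$DATE.tar.gz"
mkdir -p "$BACKUP_FOLDER/$SITE"

# fake "remote" home directory with one file in it
mkdir -p /tmp/backup_demo_src
echo hello > /tmp/backup_demo_src/file.txt

# in the real script this is: /usr/bin/ssh root@$SITE '/bin/tar czf - /home' > "$DESTINATION"
tar czf "$DESTINATION" -C /tmp backup_demo_src

# verify the archive is restorable by listing its contents
tar tzf "$DESTINATION"
```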
Saturday, January 12, 2019
vector operations in D
Up to a few years ago I was of the breed that spawns code fated to wait for requests for most of its life, so really bothering with optimizing it is not cost effective. Then I stumbled into one of the million-to-one situations that warranted a lecture about how many of my monthly wages a server upgrade is worth. It was revealed to me that adding more memory to that particular server was worth what was spent on me over more than 6 months, and as a result I suddenly became interested in optimization. The story ended with that particular call being upgraded from finishing in more than 4 hours to finishing in less than 5 minutes, and with me starting to have real doubts about the meme regarding the hardware/developer-time cost ratio.
This is the context. The meat of the issue is that Dlang has a very easy to use way to let the compiler decide how operations on arrays should be done, based on what is available to it at compile time: Array Operations. Testing the performance gains was done as described in Other Dev tools: Valgrind Helper. Screenshots are of KCachegrind.
The code:
module tests.vector_operations;

void main() {
    test_loop();
    test_vector();
}

void test_vector() {
    int[1000000] test;
    int[1000000] result;
    // array operation: the compiler picks the best way to vectorize this
    result[] = test[] + 1;
}

void test_loop() {
    int[1000000] test;
    int[1000000] result;
    for (size_t i = 0; i < test.length; i++) {
        result[i] = test[i] + 1;
    }
}

Compiling, running valgrind, demangling:
$ dmd -g vector_operations.d
$ valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes --callgrind-out-file=callgrind_out ./vector_operations
$ ddemangle callgrind_out > callgrind_out.demangled

Inspecting the results in KCachegrind:
Wednesday, January 9, 2019
porting old code from Node.js to D v2
Since about August I have been porting code from Node.js to Dlang. Node.js is nice enough but the ecosystem has a Jurassic-going-on-Cretaceous flavour to it.
I guess I no longer start drooling when I see templates and I don't swoon when having to deal with strong and strict typing, though templates help a lot and without them I'd have very probably given up and returned to Perl 5.
Here is the code: https://bitbucket.org/emilper/zug-matrix . I started with the naive algorithms from the original code and I am moving toward more formal linear algebra, and I already know more about it than I ever did. Again, dlang templates are worth *my* weight in gold :) ; reusing code between the more symbolic matrices and the numeric matrices is worth the effort of learning about dlang templates.
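The reuse that templates buy can be sketched in a few lines: one Matrix template instantiated for several element types. The names below are illustrative only, not zug-matrix's actual API.

```d
// a minimal sketch of one Matrix template reused across element types;
// hypothetical names, not the zug-matrix API
import std.stdio : writeln;

struct Matrix(T) {
    T[] data;
    size_t rows, cols;

    T at(size_t r, size_t c) { return data[r * cols + c]; }

    // the same body works unchanged for int, double, or anything with *
    Matrix!T scale(T factor) {
        auto result = Matrix!T(data.dup, rows, cols);
        foreach (i, ref v; result.data)
            v = data[i] * factor;
        return result;
    }
}

void main() {
    auto m = Matrix!int([1, 2, 3, 4], 2, 2);
    writeln(m.scale(3).data); // [3, 6, 9, 12]
    auto d = Matrix!double([1.5, 2.5], 1, 2);
    writeln(d.scale(2.0).data); // [3, 5]
}
```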