# Planet Primates

## October 26, 2014

### Planet Clojure

#### A guide to setup test driven workflow for Clojure

Just a heads up: while this post assumes Emacs, the takeaway should be the workflow, which is fairly portable to other editors.

When developers practising test-driven development[1] in other languages first move to Clojure, one thing that becomes painfully apparent is the slow startup time of a Clojure process. While the JVM is partly to blame for this, it is Clojure itself that takes a significant amount of time to boot up.

The Clojure community is working on reducing the startup time; however, it is still noticeably slow, especially if one is coming from a dynamic language such as Ruby, Python, or JavaScript. This post presents a workflow that gives almost instantaneous feedback on running tests, right from within the editor, without reloading the Clojure process over and over.

## Intended audience

What follows is my take on what a productive TDD workflow could be in Clojure. This article expects the audience to be well versed in their respective editors and to have launched/played with a Clojure REPL before. It also gets smoother to follow if one has already used Stuart Sierra’s reloaded[2] workflow. While the article will use Emacs as the editor, the workflow is what matters and should be fairly portable to any configurable editor.

Note that your preferred editor should be able to talk to a running Clojure process via nREPL, as well as display the results from it, if you want similar results.

## Preparing the Clojure project

First things first, the Clojure project has to be prepared. The first step is to use an nREPL middleware for Clojure. Afterwards, we will make use of the reloaded workflow.

### Creating a new reloadable project

There is an existing Leiningen template for generating new projects based on this workflow. If you are starting from scratch, all you really need to execute is a single lein command.

lein new reloaded com.myname/awesome-project

### Transforming into a reloadable project

This is probably the trickiest part, especially if you already have an existing Clojure project that you would want to restructure. If you are starting from scratch, you can skip this section.

This is entirely based on Stuart’s reloaded workflow. Since it is the major workhorse of the whole testing-without-restarting approach, it is recommended to read his well-detailed post describing the workflow before continuing. Behind the scenes, the workflow makes use of tools.namespace[3].

The main idea of this workflow is to create a namespace which provides a system constructor. This constructor function should be able to return a new instance of the whole application/system[4].

Some more functions are present in the namespace which manage the starting and stopping of the application, conveniently called start and stop. Finally, the workflow makes use of the fact that Clojure will automatically load a file named user.clj when starting a REPL in the given environment (because the Clojure REPL starts by default in the user namespace, though this is configurable).

A user.clj file is added to the dev folder, which provides some more functions that initialize, start, and stop the system. Conveniently, (go) and (reset) functions are provided that wrap around all the others. They respectively start and reset (reloading all the changed/new files, etc.) the process properly, to leave a clean REPL to work in.
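For illustration, a dev/user.clj following this pattern might look like the sketch below. It closely mirrors Stuart's post; the app.system namespace and its system, start, and stop functions are hypothetical stand-ins for your own application.

```clojure
;; dev/user.clj -- a sketch only; app.system and its functions are
;; hypothetical stand-ins for your own application's system namespace.
(ns user
  (:require [clojure.tools.namespace.repl :refer [refresh]]
            [app.system :as app]))

(def system
  "Holds the currently running system instance."
  nil)

(defn init
  "Construct a fresh instance of the system."
  []
  (alter-var-root #'system (constantly (app/system))))

(defn start
  "Start the current system."
  []
  (alter-var-root #'system app/start))

(defn stop
  "Shut the system down, if it is running."
  []
  (alter-var-root #'system (fn [s] (when s (app/stop s)))))

(defn go
  "Initialize and start the system."
  []
  (init)
  (start))

(defn reset
  "Stop the system, reload changed namespaces, and call go again."
  []
  (stop)
  (refresh :after 'user/go))
```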

Since this depends on the complexity and design of each individual project, it is recommended to follow the above-mentioned post to properly integrate this into an already existing application.

### Preparing for nREPL

To enable/enhance the Clojure project to allow clients to talk to it via nREPL, we will add a plugin called cider-nrepl[5] to the project.clj.

CIDER is an Emacs package, and cider-nrepl, as a Clojure project plugin, enhances its capabilities; but the plugin is well consumed by other editors too (e.g. vim-fireplace[6] makes use of cider-nrepl whenever available).

Add the plugin to the project.clj.

:plugins [...
[cider/cider-nrepl "0.7.0"]
...]
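Putting the pieces together, the relevant parts of a project.clj might look like the sketch below. The version numbers and the :dev profile layout (which keeps dev/user.clj out of the main source path) are assumptions to adapt to your own project.

```clojure
(defproject com.myname/awesome-project "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.6.0"]]
  :plugins [[cider/cider-nrepl "0.7.0"]]
  ;; dev-only code such as user.clj lives under dev/
  :profiles {:dev {:source-paths ["dev"]
                   :dependencies [[org.clojure/tools.namespace "0.2.7"]]}})
```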

## Trying it out

Now that the project has been set up, let’s make sure it’s working normally. As expected, we start a normal REPL and test that (go) and (reset) work properly.

After you first launch the REPL via leiningen,

lein repl

run the functions provided by the reloaded workflow

user=> (go)
user=> (reset)
:reloading (.... a list of all your files in the project ...)

Running (reset) should reload all your files in the project and give you a clean slate to work with in the running process. This is the magic that the test workflow further ahead makes use of.

## Tuning the editor

The following section will have instructions for Emacs, but it should be applicable to any fairly configurable editor.

### Talking over nREPL

As mentioned earlier, CIDER is the package used by Emacs to talk to the Clojure project via nREPL. Install the package into Emacs however you prefer.

Please make sure that the cider-nrepl plugin for the Clojure project and the CIDER package for Emacs are compatible with each other. This can be checked on the respective projects’ pages. As of this writing, the release version numbers are synchronised between the projects.

### Executing tests on the nREPL

After installing the package/plugin, ensure that it’s loaded and open the Clojure project that you want to work on. Load up the test file and fire off a REPL from the editor.

After connecting to the REPL one can execute tests directly on it, change the files in the editor, reload them in the REPL using (reset) and rerun the tests.

This can be done in Emacs as follows.

;; Fire off cider to connect to a REPL.
;; This will take more than just a few seconds.
M-x cider-jack-in<RET>
;; Also C-c M-j as provided by CIDER

Once the REPL is ready, initialize the system and interact with it.

user=> (go)
;; Now one can use test runner functions as provided by testing libraries;
;; for example, clojure.test tests can be run as follows
user=> (clojure.test/run-all-tests)
;; after this one can change the files that they are working on and reset the REPL
user=> (reset)
;; now run the tests again, etc.

However, the above-mentioned method is a poor wo/man’s way of running tests in the REPL. One can directly make use of CIDER's functionality to run/re-run all or selected tests.

This can be done in Emacs as follows.

;; Make sure the REPL is running and the project has started (go)
;; Open the test file you want to work on and execute the command
M-x cider-test-run-tests<RET>
;; also C-c ,
;; the above will run all the tests in the file and show results
;; either in the buffer (when failed) or in the echo line (if passed)

;; one can also selectively run tests
;; place the cursor on the test you want to run and execute
M-x cider-test-run-test<RET>
;; also C-c M-,

While executing the tests has gotten a bit faster, the problem still remains that the REPL has to be reloaded every time something is changed in the code. The final section deals with this.

## Lightning quick re-runs

omg, i haz an elisp :) ...... basically !

Let’s write some Elisp to put the whole reload-and-run-tests flow just a keypress away. Add the following to your relevant Emacs configuration files.

This example only provides a keypress to reload and run all the tests in the namespace, but you should be able to get the idea and extend it.

(defun cider-repl-command (cmd)
  "Execute commands on the cider repl"
  (cider-switch-to-repl-buffer)
  (goto-char (point-max))
  (insert cmd)
  (cider-repl-return)
  (cider-switch-to-last-clojure-buffer))

(defun cider-repl-reset ()
  (interactive)
  (save-some-buffers)
  (cider-repl-command "(user/reset)"))

(defun cider-reset-test-run-tests ()
  (interactive)
  (cider-repl-reset)
  (cider-test-run-tests))

(define-key cider-mode-map (kbd "C-c r") 'cider-repl-reset)
(define-key cider-mode-map (kbd "C-c .") 'cider-reset-test-run-tests)

Now you should be able to press C-c . to reset and run all tests via CIDER, as well as reset the REPL separately via C-c r.

## Conclusion

I wrote this guide because I couldn’t find a single source of information for arriving at the workflow presented above. I have pulled in ideas from a lot of other sources to come up with it.

I hope this is of some use to you as well. Feel free to share your suggestions and thoughts regarding improvements/corrections below.

Update: I realised a bit late that I missed the published date by a whole month, but I'll let the permalinks stay for now and not break any links directed here.

### Footnotes

1. Test-driven development here meaning, in the wide sense, all styles of instant-feedback testing common in dynamic languages, without a preference for test-first, test-driven, etc. The point is being able to execute the test one is writing without switching the ongoing context too much (e.g. without leaving the editor).

2. Stuart Sierra has explained his reloaded workflow in detail here. He has also created a leiningen template for creating new reloaded projects.

3. According to the tools.namespace GitHub page, it includes tools for managing namespaces (generating a dependency graph, reloading, etc.) in Clojure.

4. Called a system because it represents the whole system or application that one is working on.

5. cider-nrepl is a collection of nREPL middleware designed to enhance CIDER, as explained here.

6. vim-fireplace is a Vim plugin for Clojure that provides a REPL within the editor.

### CompsciOverflow

#### A question about the proof of the lower bound of compares in comparison based sorting

I'm reading Sedgewick and Wayne's book Algorithms. When I read the following proof in the attached picture, I don't understand why the number of comparisons is assumed to be lg(number of leaves). Any help is appreciated!

### StackOverflow

#### How to define a multiple composition function?

Is there a way to define a Haskell function that takes a collection (of some kind) of functions and produces a single function: their composition, from right to left?

I tried:

foldr ($)

but this only accepts a list of functions whose result has the same type as that of their argument:

foldr ($) :: a -> [a -> a] -> a

So, I can do:

foldr ($) 5 [(+) 1, (*) 2]

and this produces the correct result, 11. However, trying to evaluate this:

foldr ($) "hello" [(+) 1, length]

produces the error below:

ERROR - Type error in list

*** Expression     : [(1 +),length]
*** Term           : length
*** Type           : [a] -> Int
*** Does not match : Int -> Int

### QuantOverflow

#### Build a customizable trading engine in python

I am planning to build a fully customizable backtesting trading engine in Python from scratch as an open-source project. The main features I am considering are:

• It should be fully customizable from top to bottom
• Customization should be easy, so that anyone with basic Python knowledge can do it
• It should have a built-in template engine for reports, which is also customizable
• Anyone should be able to customize it to fit their trading style

So what are the basic things I have to consider for building a trading engine? Which Python modules would be useful for this project? If anyone knows of any material regarding this, sharing a link would be very helpful.

### Fefe

#### After yesterday's selection of contempt for humanity ...

After we saw a hand-picked selection of contempt for humanity from the sewage fields of one side yesterday, today there is a best-of of the contempt for humanity from the other side: an entertaining talk by Anita Sarkeesian about how 4chan and co. try to harass her. It goes without saying that the hatred this woman faces is likewise completely unacceptable. But I think that you cannot weigh the injustice of one side against the injustice of the other and offset them.

Both sides are obviously behaving unacceptably. So far, however, I have mainly encountered the hatred from the feminism side, so that one affects my personal perception of the world more.

Even worse than the original rotten behavior, I find, is when bystanders get shot at because they did not distance themselves quickly or clearly enough from something they not only had nothing to do with, but of which they may not even have been aware. Whoever criticizes feminists is, regardless of the substantive questions raised, automatically a misogynist stone-age Neanderthal himself. The low threshold and the vehemence of this righteous anger is something I have otherwise only seen with accusations of antisemitism. And I am almost certain that the same happens the other way around in this debate, but I have not yet been able to observe it personally.

### /r/compsci

#### Hearing impaired in academia/industry frustrated with lack of captioning in conferences

I'm wondering if there are any members of the hearing-impaired community who are working in CS/maths and related fields and are frustrated with the lack of captioning at technical and academic conferences. How do you deal with this problem? I would love to know your thoughts.

submitted by buggycerebral

### StackOverflow

#### Dealing Options from a Map.get in Scala

I have a Scala Map that contains a bunch of parameters that I get in an HTTP request.

val queryParams = Map( ("keyword" -> "k"), ("from" -> "f"), ("to" -> "t"), ("limit" -> "l") )


I have a method that takes all these parameters.

def someMethod( keyword: String, from: String, to: String, limit: String) = { //do something with input params }


I want to pass the parameters from the map into my method someMethod.

queryParams.get returns an Option, so I can call something like queryParams.get("keyword").getOrElse("") for each input parameter.

someMethod( queryParams.get("keyword").getOrElse(""), queryParams.get("from").getOrElse(""), queryParams.get("to").getOrElse(""), queryParams.get("limit").getOrElse(""))


Is there a better way ?

### /r/clojure

#### How do I merge a lazyseq of sets?

I'm working on a problem that involves a graph. I've chosen to represent the graph as a map, where each key is a node, mapped to the set of its neighbors.

So, for example, a simple map would be {:1 #{:3 :2}, :2 #{:3}, :3 #{:1 :2}}

When I want to know the neighbors of a set of points, I do the following: (map #(get graph %) neighbors)

Which returns this:

(#{:88 :27 :77 :75 :73 :13 :38 :57 :85 :35 :81 :25 :32 :5} #{:88 :65 :52 :12 :56 :70 :13 :66 :53 :43 :57 :26 :79 :44 :55 :8 :62 :74 :20 :94 :90 :6 :54 :68} #{:88 :45 :31 :65 :40 :12 :75 :13 :0 :58 :38 :44 :35 :78 :20 :86 :17 :46 :28 :15 :33} (...)

That is, a lazyseq of sets. How do I merge these sets?

I've tried to create an empty set (neinei) and do the following:

(map #(def neinei (merge-with clojure.set/union % neinei)) (map #(get graph %) neighbors))

But somehow this fails... Any insight?

submitted by false_god

### CompsciOverflow

#### Are if statements unnecessary if a program is represented as an explicit state machine?

This question occurred to me some time ago when I was thinking about whether or not if statements are fundamental in computation.

Consider a program that manages a single bank account (for the sake of simplicity). The bank account could be defined as something like

Account {
    int balance; // current amount of money

    boolean withdraw(int n) {
        if (balance >= n) {
            balance = balance - n;
            return true;
        } else
            return false;
    }

    void deposit(int n) {
        balance = balance + n;
    }
}

Since the program has no way to know which state it is currently in unless it performs validations using if statements, as in the withdraw operation, if statements are unavoidable.

However, over the course of time, the program will pass through a finite set of states that can be known beforehand. In this particular case, a state is defined solely by the value of the balance variable; hence we would have the states {balance = 0, balance = 1, balance = 2, ...}.

If we assign each state a number, say states {0, 1, 2, ...} with a 1-1 correspondence to the above set of states, and assign each operation a numeric identifier as well (say deposit = 0 and withdraw = 1), we could model the program as explicit transitions between states and therefore remove every if statement from the code.

Consider that state = 0 is the state where balance = 0 and we want to perform a deposit of 50 dollars. If we coded every single possible execution of the deposit function, we could just define the deposit function as

void deposit (int n) { deposit[state][n]; // index deposit instance for the current state, amount = n; }

deposit[][] would be a matrix of pointers to a set of functions that represent each possible execution of deposit, like

deposit[0][0] -> balance = balance + 0; state = 0;
deposit[0][1] -> balance = balance + 1; state = 1;
....

in the case of withdrawal, it would be like:

boolean withdraw (int n) { return withdraw[state][n]; // index withdraw instance for the current state and amount = n; }

withdraw[][] would be a matrix of pointers to a set of functions that represent each possible execution of withdraw, like:

withdraw[0][100] -> return false; state = state;
...
withdraw[200][100] -> balance = balance - 100; return true; state = 100;

In this situation, the program that manages the single bank account can be written without using a single if statement!

As a consequence, however, we have to use A LOT more memory, which may make the solution unreasonable. Also, one may ask how we filled the deposit[][] and withdraw[][] matrices. By hand? It somehow implies that a previous computation that used ifs was necessary to determine all possible states and transitions.
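For what it's worth, the table-driven idea sketches naturally in Clojure (the toy balance cap and all names here are hypothetical). Note where the conditional logic ends up: it is paid once while precomputing the table, not at each call.

```clojure
;; A toy, precomputed transition table for deposit: entry [state n] holds the
;; next balance. The capping logic (a conditional) runs only at build time.
(def max-balance 5)

(def deposit-table
  (vec (for [state (range (inc max-balance))]
         (vec (for [n (range (inc max-balance))]
                (min max-balance (+ state n)))))))

(defn deposit
  "Dispatch without an if: a plain indexed lookup into the table."
  [state n]
  (get-in deposit-table [state n]))

;; (deposit 0 3) ;=> 3
;; (deposit 4 3) ;=> 5 (capped at max-balance)
```

This also makes the question's closing observation concrete: the ifs have not vanished; they have been traded for memory and moved into the computation that fills the table.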

All in all, are ifs fundamental, or does my example prove that they aren't? If they are, can you provide an example where this does not work? If they are not, why don't we use solutions like these more often?

Is there some law of computation which states that if statements are unavoidable?

### StackOverflow

#### Can someone explain Clojure Transducers to me in Simple terms?

I have tried reading up on this but I still don't understand their value or what they replace. Do they make my code shorter, more understandable, or what?

## Update

A lot of people posted answers, but it would be nice to see examples, with and without transducers, for something very simple, which even an idiot like me can understand. Unless of course transducers need a certain high level of understanding, in which case I will never understand them :(

### Planet Emacsen

#### Got Emacs?: Update your package search url as melpa.milkbox.net is now melpa.org

It's been a couple of days, but the URL has changed to melpa.org. You may need to change your .emacs file from ("melpa" . "http://melpa.milkbox.net/packages/") to ("melpa" . "http://melpa.org/packages/").

### /r/compsci

#### Introduction to Sparsity, Machine Learning/Image Processing Applications

I'm fairly familiar with the standard textbooks on Machine Learning (e.g. Bishop, MacKay, etc.). Recently, I've been reading more and more about "sparsity" and applications to ML, image processing, etc. So far, I'm finding this not-so-easy going.

Can anyone recommend a few papers/textbooks which could ease me into this field? Perhaps there exist a few introductory lectures online?

submitted by Eherrtele59955

### CompsciOverflow

#### How to find a local minimum of a complete binary tree?

How to find a local minimum of a complete binary tree?

Consider an $n$-node complete binary tree $T$, where $n = 2^d - 1$ for some $d$. Each node $v \in V(T)$ is labeled with a real number $x_v$. You may assume that the real numbers labeling the nodes are all distinct. A node $v \in V(T)$ is a local minimum if the label $x_v$ is less than the label $x_w$ for all nodes $w$ that are joined to $v$ by an edge.

You are given such a complete binary tree $T$, but the labeling is only specified implicitly: for each node $v$, you can determine the value $x_v$ by probing the node $v$. Show how to find a local minimum of $T$ using only $O(\log n)$ probes to the nodes of $T$.

Attribution: this seems to be Problem 6 in Chapter 5 "Divide and Conquer" from the book "Algorithm Design" by Jon Kleinberg and Eva Tardos.

#### Support Vector Machines as Neural Nets?

This is more of a conceptual question.

I have learned about Neural Nets, and I have some clue as to how Support Vector Machines work. I read somewhere however that given the appropriate kernel (is that right?), the SVM is identical to the Neural Net. Could someone who understands this please enlighten me as to how that's possible?

### /r/emacs

#### Disable copy on selection (Mac/24.4)

Whenever I make a selection, either through evil or the mouse, the selection gets automatically copied into the clipboard. I have set

(setq select-active-regions nil)

(setq mouse-drag-copy-region nil)

(setq x-select-enable-primary nil)

but that doesn't seem to do anything. Any help? Please?

submitted by pxpxy

### TheoryOverflow

#### EXPSPACE-complete problems

I am currently trying to find EXPSPACE-complete problems (mainly to find inspiration for a reduction), and I am surprised by the small number of results coming up.

So far, I found these, and I have trouble expanding the list:

Do you know other contexts where EXPSPACE-completeness appears naturally?

### /r/compsci

#### Latency of a Parallel Fold/Reduce

A while ago, I posted here to ask about parallelizing a reduce/fold operation. The main point was that given N values and N processors, we can compute the minimum with O(lg(N)) latency. Of course, the total processor time is still O(N).

This rough analysis implicitly assumes that the communication delays do not scale with N. This doesn't seem like a realistic assumption. Can someone point me to a reference which studies the issue of communication delays in a parallelized reduce?

submitted by carmichael561

### CompsciOverflow

#### Comparing $2^{F_n}$ and $2^{\varphi^n}$

If we define $F_n$ to be the $n$th Fibonacci number and $\varphi$ to be the golden ratio, then can we say that

$2^{F_n} \in \Theta(2^{\varphi^n})$

or, in other words,

$2^{\frac{\varphi^n - (-\varphi)^{-n}}{\sqrt{5}}} \in \Theta(2^{\varphi^n})$

It's simple to show that $F_n \in \Theta(\varphi^n)$, but about the above one I don't have any idea.
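The "simple" half can be spelled out from Binet's formula (matching the exponent written in the question):

```latex
F_n \;=\; \frac{\varphi^n - (-\varphi)^{-n}}{\sqrt{5}},
\qquad \bigl|(-\varphi)^{-n}\bigr| = \varphi^{-n} \le 1 \ \ (n \ge 0)
\;\Longrightarrow\;
\frac{\varphi^n - 1}{\sqrt{5}} \;\le\; F_n \;\le\; \frac{\varphi^n + 1}{\sqrt{5}}
\;\Longrightarrow\;
F_n \in \Theta(\varphi^n).
```

Note, though, that this only pins $F_n$ down to $\varphi^n/\sqrt{5} + O(1)$, and a constant factor like $1/\sqrt{5}$ sitting in the exponent of $2^{F_n}$ is no longer a constant factor overall.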

### /r/emacs

#### Python3 Autocomplete emacs

Can anyone tell me how to get this ?? I have it on vim but i much prefer emacs. Thank you in advance

submitted by chopperkuncodes

### CompsciOverflow

#### How many elements are in a set containing $\{X_{12}, X_{23}, X_{34}, \dots, X_{(n-1)n}\}$

How many elements are in a set containing $\{X_{12}, X_{23}, X_{34}, \dots, X_{(n-1)n}\}$? I tried drawing a matrix but it didn't work.

### StackOverflow

#### Cartesian product in clojure

I'm trying to implement a method that will take a list of lists and return the Cartesian product of these lists.

Here's what I have so far:

(defn cart
  ([] '())
  ([l1] (map list l1))
  ([l1 l2]
   (map (fn f [x]
          (map (fn g [y] (list x y))
               l2))
        l1)))

(defn cartesian-product [& lists]
  (reduce cart lists))

;test cases
(println (cartesian-product '(a b) '(c d))) ; ((a c) (a d) (b c) (b d))
(println (cartesian-product ())) ;()
(println (cartesian-product '(0 1)))    ; ((0) (1))
(println (cartesian-product '(0 1) '(0 1))) ; ((0 0) (0 1) (1 0) (1 1))
(println (apply cartesian-product (take 4 (repeat (range 2))))) ;((0 0 0 0) (0 0 0 1) (0 0 1 0) (0 0 1 1) (0 1 0 0) (0 1 0 1) (0 1 1 0) (0 1 1 1) (1 0 0 0) (1 0 0 1) (1 0 1 0) (1 0 1 1) (1 1 0 0) (1 1 0 1) (1 1 1 0) (1 1 1 1))


The problem is my solution is really 'brackety'.

(((a c) (a d)) ((b c) (b d)))
()
(0 1)
(((0 0) (0 1)) ((1 0) (1 1)))
(((((((0 0) (0 1)) 0) (((0 0) (0 1)) 1)) 0) (((((0 0) (0 1)) 0) (((0 0) (0 1)) 1)) 1)) ((((((1 0) (1 1)) 0) (((1 0) (1 1)) 1)) 0) (((((1 0) (1 1)) 0) (((1 0) (1 1)) 1)) 1)))


I tried to fix this by flattening one level:

      (apply concat (reduce cart lists))


but then I get a crash like so:

((a c) (a d) (b c) (b d))
()
IllegalArgumentException Don't know how to create ISeq from: java.lang.Long clojure.lang.RT.seqFrom (RT.java:494)


So, I think I'm close but missing something. However, since I'm so new to Clojure and functional programming, I could be on the completely wrong track. Please help! :)

### Overcoming Bias

#### The Rosy View Bias

How much does merit contribute to success? A rosy view is that success is mostly due to merit, while a dark view is that success is mostly not due to merit, but instead due to what we see as illicit factors, such as luck, looks, wit, wealth, race, gender, politics, etc.

Over a lifetime people gain data on the relation between success and merit. And one data point stands out most in their minds: the relation between their own success and merit. Since most people see themselves as being pretty meritorious, the sign of this data point depends mostly on their personal success. Successful people see a rosy view, that success and merit are strongly related. Unsuccessful people see a dark view, that success and merit are only weakly related.

In addition, successful people tend to know other successful people, and people tend to think their associates are also meritorious. So the other data points around people tend to confirm their own data point. The net result is that older people tend to have more data on the relation between merit and success, with successful people seeing a rosy view, and unsuccessful people seeing a darker view.

Since the distribution of success is quite skewed, most older people see a darker view. However, that dark majority doesn’t necessarily get heard much. Most of the people who are heard, such as reporters, authors, artists, professors, etc., see rosy views, as they tend to be both older and successful.

Also, most people prefer to look successful, and so they prefer to look like they’ve seen a rosy view. Even if they haven’t, at least yet. And a good way to look like you believe something is to actually believe it, even if your evidence doesn’t support it so much.

In sum, we expect the people we hear to be biased toward saying and believing a rosy view of the relation between success and merit. Of course that might be good for the world, if a realistic view would lead to too much envy and conflict. But it would still be a biased view.

Added 11p: Of course if they can find a way to rationalize it, we expect everyone to be inclined to favor a view where merit is a big cause of people reaching up to the success level where they are, but non-merit is a relatively bigger cause of people reaching the higher levels above them.

## October 25, 2014

### TheoryOverflow

#### Can Curry-Howard prove a theorem from the types in your program, that has nothing to do with your program?

Curry-Howard means that any type can be interpreted as a theorem in some logical system, and any term can be interpreted as a proof of its type. This does not mean that those theorems have anything to do with your program. Take the following function:

swap : forall a,b. (a,b) -> (b,a)
swap pair = (snd pair, fst pair)


The type here is forall a,b. (a,b) -> (b,a). The logical meaning of this type is (a and b) => (b and a). Note that this is a theorem in logic, not a theorem about your program.

My question is: can Curry-Howard prove a theorem from the types in your program that has nothing to do with your program?

#### Where is the proof of universality of Rule110 in Stephen Wolfram's book?

I have Stephen Wolfram's book A New Kind of Science, and I want to find the proof of the universality of Rule 110. I couldn't find a clue in the contents page, since it only shows 12 chapters and no details.

Could someone help me please? Does he show the proof in his book at all? What is the page number where he shows it?

I'm reading the paper by Matthew Cook, for sure; it's just that more pictures help me understand things better. I now know pretty well what the tag system and the cyclic tag system are, but I am still trying to find a way to visualize the equivalence between the tag system and the Turing machine. Other resources about this are welcome, too!

Many thanks!

### StackOverflow

#### How to make a Clojure function take a variable number of parameters?

I'm learning Clojure and I'm trying to define a function that takes a variable number of parameters (a variadic function) and sums them up (yep, just like the + procedure). However, I don't know how to implement such a function.

Everything I can do is:

(defn sum [n1, n2] (+ n1 n2))

Of course, this function takes two parameters and two parameters only. Please teach me how to make it accept (and process) an undefined number of parameters.

### CompsciOverflow

#### How to make logical inference from simulated data

I have data collected from a computer simulation of football games which seems to have recurring patterns of the following form.

If Madrid plays Arsenal and the match ends under 3 goals, then in their next match against each other, Madrid will win. If Madrid happens to lose and then plays against Chelsea next, they will win 90% of the time.

How do I find such inferences from simulation-generated data like this? There are other forms of hidden patterns that I believe exist in the dataset.

### StackOverflow

#### Is there a way in F# to extract the value of a record member in a Clojure fashion?

In Clojure I would write the following code:

user=> (def points [{:x 11 :y 12} {:x 21 :y 22}])
#'user/points
user=> (map :x points)
(11 21)


I can do this because :x can be used as a function. This is a very useful feature.

The same code in F# would look like this:

> type Point = {x : int; y: int};;
> let points = [{x=11;y=12}; {x=21; y=22}];;
> List.map (fun p -> p.x) points
val it : int list = [11; 21]


Because I hate writing the anonymous function all the time, I find myself writing the type Point like this:

type Point =
    {
        x : int
        y : int
    }

    static member getX p = p.x;;


...which gives me the possibility of doing:

> List.map Point.getX points


This is still messy because I need to write a getter for each record member that I use.

Instead what I would like is a syntax like this:

> List.map Point.x points


Is there a way to do this without having to write the messy anonymous function (fun p -> p.x) or the static getter?

UPDATE:

By the way, Haskell also does it the same as Clojure (actually it's the other way around):

Prelude> data Point = Point { x :: Int, y :: Int}
Prelude> let p = Point { x = 11, y=22}
Prelude> x p
11


UPDATE 2:

Hopefully a more obvious reason against the lambda is an example where the type inference doesn't work without help:

type Point2D = { x : int; y : int}
type Point3D = { x : int; y : int; z : int}

let get2dXes = List.map (fun (p:Point2D) -> p.x)
let get2dXes' : Point2D list -> int list = List.map (fun p -> p.x)
let get2dXes'' (ps : Point2D list) = List.map (fun p -> p.x) ps


...which are way less elegant than something like:

let get2dXes = List.map Point2D.x


I don't want to start flame wars about which syntax is better. I was just sincerely hoping that there is some elegant way to do the above since I haven't found any myself.

Apparently all I can do is pray to the mighty gods of F# to include a feature like this in a future version, next to the type classes ;)

UPDATE 3:

This feature is already proposed for a future language version. https://fslang.uservoice.com/forums/245727-f-language/suggestions/5663326-syntax-for-turning-properties-into-functions Thanks JackP.!

### TheoryOverflow

#### Polynomial and rational function representation of remainder

Consider the remainder operation $R_q(x)=x \mod q$ where $x\in\Bbb Z \cap [0,q^2]$ and:

$1)$ $q$ is prime.

$2)$ $q$ is power of prime.

$3)$ $q$ is composite.

For each of the three cases, what is known about the minimal degree of real polynomial and real rational function (sum of degrees of numerator and denominator) representation of $R_q(x)$?

Even the case of $q=2^t>4$ is interesting to me.

[I am essentially asking for representation of union of segments of $q$ straight lines (each containing $q$ points) that are shifted from each other.]

### StackOverflow

#### How to implement a heterogeneous container in Scala

I need a heterogeneous, typesafe container to store unrelated types A, B, and C.

Here is a kind of type-level specification:

trait Container {
  def putA(a: A): Unit
  def putB(b: B): Unit
  def putC(c: C): Unit
  def put(o: Any): Unit = o match {
    case a: A => putA(a)
    case b: B => putB(b)
    case c: C => putC(c)
  }
  def getAllAs: Seq[A]
  def getAllBs: Seq[B]
  def getAllCs: Seq[C]
}


Which data structure is best suited to back this container?

Is it worth creating a Containerable[T] typeclass for types A, B, C?

Thanks.

### QuantOverflow

#### Do people actually use VaR in professional settings?

VaR seems like such an obviously flawed metric, I am surprised that it seems to be used so much in the private sector.

First, the way it is named and the way it is presented often imply it is the expected value of loss, when in fact it is an upper bound on loss. It seems risk measures should be conservative, and presenting an upper bound is the opposite of conservative.

More seriously, the fact that VaR is not coherent means that you can't use it to compare two portfolios, or even to compare a changing portfolio over time. Doesn't this make value at risk completely useless?

I am just starting out in quant finance, so this is a presumptuous question, and I am probably missing something. Thanks for any help.

### StackOverflow

#### How would you explain Scala's abstract class feature to a 6th grader?

I'm trying to understand this code example from O'Reilly's Programming Scala. I'm a JavaScript programmer and most of the explanations in the book assume a Java background. I'm looking for a simple, high-level explanation of abstract classes and what they're used for.

package shapes {
  class Point(val x: Double, val y: Double) {
    override def toString() = "Point(" + x + "," + y + ")"
  }

  abstract class Shape() {
    def draw(): Unit
  }

  class Circle(val center: Point, val radius: Double) extends Shape {
    def draw() = println("Circle.draw: " + this)
    override def toString() = "Circle(" + center + "," + radius + ")"
  }
}
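
A high-level way to see it: an abstract class is a template that deliberately leaves some methods unimplemented, and the compiler refuses `new Shape()` until a subclass fills them in. Here is a small sketch of the same idea (not from the book; a hypothetical variant where draw returns a String so the result is easy to observe):

```scala
// An abstract class can declare methods without bodies, and cannot be
// instantiated directly -- `new Shape()` is a compile error. Only concrete
// subclasses that fill in the missing method can be instantiated.
abstract class Shape {
  def draw(): String // declared but not defined: every concrete Shape must supply it
}

class Circle(val radius: Double) extends Shape {
  def draw(): String = "Circle with radius " + radius
}

val c: Shape = new Circle(2.0) // a Circle can be used wherever a Shape is expected
println(c.draw())
```

The payoff is the last line: code that only knows about Shape can still call draw(), because the abstract class guarantees every concrete subclass provides it.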


### Dave Winer

#### Welcome to the new Scripting News

It took a long time to get here!

There are six tabs, and three menus, in the new interface.

#### The six tabs

1. Blog -- essays from Scripting News that I write in Fargo.

2. Photos -- from my Flickr feed.

3. Links -- from Radio3.

4. Cards -- from Little Card Editor.

5. River -- from River4.

6. About -- which I wrote in the OPML Editor, for old times sake.

#### Why so many tabs?

I create lots of different kinds of content. I write, I take pictures, and make cards, and curate news. I select feeds to form a river.

All of it should come together in a place that represents me, online, as a person.

Scattering it all over the place is cool! As long as there's one place where it all comes together. That is this place, my blog, Scripting News.

The menus seem kind of self-explanatory. I expect to add more to them.

If you like software, and this is a blog about software, among other things, what's interesting is that there isn't actually any content in the home page. Three of the tabs are built from RSS feeds: Blog, Photos and Cards. The Links tab comes from a JSON file that Radio3 outputs, and the River tab uses the River.js file that River4 outputs, again in JSON. The About tab is in OPML, of course.

It's more of an application than a web page, but I guess that's the way we do things these days.

#### Should work well on phones, tablets

This part is getting pretty routine. Give it a try, let me know how it goes.

### StackOverflow

#### FlatmapValues on Map

Given a Seq of tuples like:

Seq(
  ("a", Set(1,2)),
  ("a", Set(2,3)),
  ("b", Set(4,6)),
  ("b", Set(5,6))
)


I would like to groupBy and then flatMap the values to obtain something like:

Map(
  b -> Set(4, 6, 5),
  a -> Set(1, 2, 3)
)


My first implementation is:

Seq(
  ("a" -> Set(1,2)),
  ("a" -> Set(2,3)),
  ("b" -> Set(4,6)),
  ("b" -> Set(5,6))
) groupBy (_._1) mapValues (_ map (_._2)) mapValues (_.flatten.toSet)


I was wondering if there was a more efficient and maybe simpler way to achieve that result.
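
For what it's worth, the two mapValues passes can be fused into a single traversal of each group; a sketch of the same computation (using map rather than mapValues so it behaves the same across Scala versions):

```scala
val grouped = Seq(
  ("a", Set(1, 2)),
  ("a", Set(2, 3)),
  ("b", Set(4, 6)),
  ("b", Set(5, 6))
).groupBy(_._1)                                              // group tuples by key
 .map { case (k, pairs) => k -> pairs.flatMap(_._2).toSet }  // flatten each group once

// grouped == Map("a" -> Set(1, 2, 3), "b" -> Set(4, 5, 6))
```

Whether this is measurably faster depends on the collection sizes, but it does avoid building the intermediate Seq-of-Sets values that the two-step mapValues version produces.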

#### Lightweight HTTP server in Scala or Java

I want a fast, multithreaded HTTP server in Scala (or in Java that's usable from Scala). I don't need anything fancy: just give me the requested URL and headers, and let me give back the headers and body.

I've tried using spray-can, but it does too much validation and parsing - it rejects lots of urls or other things that, while perhaps bad choices for web apps, are perfectly fine in my application.

Can you recommend a good basic HTTP server that lets me code the response and gets out of the way?

UPDATE: I've tried Scalatra and Spray. The problem I'm having is that they have a certain model of requests and responses which is too high-level. I want to get the HTTP request and send out my response however I like. (It's a different usage than a typical web application.)

#### Extending the ZeroMQ Majordomo Pattern to have Multiple Brokers

I am attempting to extend the ZeroMQ Majordomo Pattern to operate with multiple brokers interconnected in a mesh network. It will operate identically to the original Majordomo Pattern except a broker B1 will forward a client request to broker B2 if B1 cannot service its client's request but B2 can.

I have two design questions about the implementation of this pattern.

1) How will broker B1 know what services are available on B2? One possible solution is to create a "hash node" that connects to and keeps track of the services available on each broker in the mesh. And when a broker wants to forward a client request, it consults the hash node. However, this requires all brokers to regularly communicate with the hash node. Will this communication hinder performance or create any subtle issues I didn't mention?

2) What kind of sockets should brokers use to communicate in a mesh network? Each broker clearly needs a ROUTER socket to receive messages, connection requests, etc. But what socket type should each broker use to send requests? A DEALER socket is a poor choice because each broker would need to open a DEALER socket to every other broker. ROUTER would be nice but requires "hacks" to implement. Is there any other way to efficiently connect brokers together?

#### How to get rid of : class type required but T found

How can I solve this compilation error:

trait Container {
  def getInts() : Seq[Int]
  def getStrings() : Seq[String]

  def put[T](t: T)
  def get[T] : Seq[T]
}

class MutableContainer extends Container {
  val entities = new mutable.HashMap[Class[_], mutable.Set[Any]]() with mutable.MultiMap[Class[_], Any]

  override def getStrings(): Seq[String] = entities.get(classOf[String]).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[String]] // strings
  override def getInts(): Seq[Int] = entities.get(classOf[Int]).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[Int]]

  override def get[T]: Seq[T] = entities.get(classOf[T]).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[T]]
  override def put[T](t: T): Unit = entities.addBinding(t.getClass, t)
}


Here is the error:

[error] Container.scala:23: class type required but T found
[error]       override def get[T]: Seq[T] = entities.get(classOf[T]).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[T]]
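
The error arises because classOf[T] needs a statically known class, and inside get the type parameter T has been erased by the time the code runs. The usual workaround, sketched below (assuming Scala 2.x and the same mutable.MultiMap backing as the snippet), is to demand a ClassTag so the runtime class is passed in implicitly:

```scala
import scala.collection.mutable
import scala.reflect.ClassTag

class MutableContainer {
  val entities = new mutable.HashMap[Class[_], mutable.Set[Any]]()
    with mutable.MultiMap[Class[_], Any]

  def put[T](t: T): Unit = entities.addBinding(t.getClass, t)

  // The implicit ClassTag supplies the runtime class that classOf[T] could not.
  def get[T](implicit ct: ClassTag[T]): Seq[T] =
    entities.get(ct.runtimeClass).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[T]]
}

val mc = new MutableContainer
mc.put("hello")
mc.get[String] // Seq("hello")
```

Primitives still need care, because an Int stored via put is boxed to java.lang.Integer on the way in — which is exactly the issue raised in the next question below.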


#### How to manage type equivalence [duplicate]

Given this kind of type-aware container, I want to manage type equivalence.

trait Container {
  def put[T](t: T)
  def get[T](implicit m: ClassTag[T]) : Seq[T]
}

class MutableContainer extends Container {
  val entities = new mutable.HashMap[Class[_], mutable.Set[Any]]() with mutable.MultiMap[Class[_], Any]

  override def get[T](implicit ct: ClassTag[T]): Seq[T] = entities.get(ct.runtimeClass).map(_.toSeq).getOrElse(Seq.empty).asInstanceOf[Seq[T]]
  override def put[T](t: T): Unit = entities.addBinding(t.getClass, t)
}


But these two gets don't return the same result. How can I solve this issue?

val mc = new MutableContainer()
mc.put(1)
println(mc.get[Int])
println(mc.get[java.lang.Integer])
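
A plausible explanation (assuming Scala 2.x on the JVM): put boxes the Int, so the value is keyed under classOf[java.lang.Integer], while ClassTag[Int].runtimeClass is the primitive class java.lang.Integer.TYPE — two different map keys. A sketch demonstrating the mismatch, with a hypothetical normalize helper as one possible fix:

```scala
import scala.reflect.ClassTag

// ClassTag[Int] describes the primitive int class...
val primitive = implicitly[ClassTag[Int]].runtimeClass  // int
// ...but a stored value has been boxed to java.lang.Integer.
val boxed = (1: Any).getClass                           // class java.lang.Integer

println(primitive == boxed) // false: this is why get[Int] misses

// One possible fix: normalize primitive classes to their boxed forms before lookup
// (a hypothetical helper, shown for Int only; the other primitives work the same way).
def normalize(c: Class[_]): Class[_] =
  if (c == classOf[Int]) classOf[java.lang.Integer] else c

println(normalize(primitive) == boxed) // true
```

Applying normalize inside both put and get makes get[Int] and get[java.lang.Integer] hit the same key.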


### Dave Winer

Snapshot of last version of Scripting News, before the switch-over.

### CompsciOverflow

#### how to verify permutation generated in constant amortized time?

Here is an algorithm that generates the next permutation in lexicographic order, changing the given permutation in-place:

1. Find the largest index k such that a[k] < a[k+1]. If no such index exists, exit (the permutation is the last permutation).
2. Find the largest index l such that a[k] < a[l].
3. Swap a[k] with a[l].
4. Reverse the sequence from a[k+1] up to and including the final element a[n].

Is the next permutation generated in constant amortized time and if yes, how to verify it?

### StackOverflow

#### how to get the datetime of the last transaction in a datomic db?

I want to find the most recent transaction made to a connection. The following does not seem to give the correct date:

(require '[datomic.api :as datomic])

(-> conn datomic/db datomic/basis-t datomic/t->tx (java.util.Date.))


### CompsciOverflow

#### Model Checking: hardware vs software

In short: What is the basic difference that allows model checking for hardware to be "easily" solvable, but makes it undecidable for software?

I guess it has to boil down to the difference between hardware being finite automata and software having the expressiveness of a Turing machine, which would basically mean that it's the infinite amount of (randomly accessible, vs. that of a push-down automaton) memory that the Turing machine has available -- is this correct?

### QuantOverflow

#### Uniqueness of equivalent martingale measure in Black Scholes-Model

Let's consider the standard Black-Scholes model with price process $S_t$ satisfying the SDE $dS_t = S_t(bdt + \sigma dB_t)$, where $B_t$ is standard Brownian Motion under the probability $\mathbb{P}$. I understand the proof of existence of a martingale measure $\mathbb{Q}$ equivalent to $\mathbb{P}$ based on the Girsanov theorem, but I can't see how to derive the uniqueness of $\mathbb{Q}$. Can you help?

### StackOverflow

#### Checking odd parity in clojure

I have the following functions that check for odd parity in a sequence:

(defn countOf [a-seq elem]
  (loop [number 0 currentSeq a-seq]
    (cond (empty? currentSeq) number
          (= (first currentSeq) elem) (recur (inc number) (rest currentSeq))
          :else (recur number (rest currentSeq)))))

(defn filteredSeq [a-seq elemToRemove]
  (remove #{elemToRemove} a-seq))

(defn parity [a-seq]
  (loop [resultset [] currentSeq a-seq]
    (cond (empty? currentSeq) (set resultset)
          (odd? (countOf currentSeq (first currentSeq))) (recur (concat resultset (vector (first currentSeq))) (filteredSeq currentSeq (first currentSeq)))
          :else (recur resultset (filteredSeq currentSeq (first currentSeq))))))


for example (parity [1 1 1 2 2 3]) -> #{1 3}, that is, it picks the elements that occur an odd number of times in the sequence.

1. Is there a better way to achieve this?

2. How can this be done with Clojure's reduce function?
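
For question 2, one possible sketch: an element belongs in the result exactly when it occurs an odd number of times, so a set whose membership is toggled on each occurrence does the whole job in a single reduce:

```clojure
(defn parity-via-reduce [a-seq]
  ;; toggle membership: an element ends up in the set
  ;; iff it occurs an odd number of times
  (reduce (fn [acc x]
            (if (contains? acc x) (disj acc x) (conj acc x)))
          #{}
          a-seq))

(parity-via-reduce [1 1 1 2 2 3]) ;=> #{1 3}
```

This is a single pass over the sequence, versus the quadratic repeated scans that countOf and filteredSeq perform in the loop version.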

#### how to instantiate Unit in Scala?

All I desire is to use some concurrent Set (which appears not to exist at all). Java uses java.util.concurrent.ConcurrentHashMap<K, Void> to achieve that behavior. I'd like to do something similar in Scala, so I created an instance of a Scala HashMap (or a Java ConcurrentHashMap) and tried to add some tuples:

val myMap = new HashMap[String, Unit]()
myMap + (("myStringKey", Unit))


This of course failed to compile, as Unit is abstract and final.

How can I make this work? Should I use Any/AnyRef instead? I must ensure nobody inserts any value.

Thanks for the help.
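
Two observations that may resolve this: the unit value is written (), while Unit is the type (and its companion object), which is why the tuple above fails to typecheck; and scala.collection.concurrent.TrieMap is a thread-safe map in the standard library that can play the role of ConcurrentHashMap<K, Void>. A sketch:

```scala
import scala.collection.concurrent.TrieMap

// A concurrent "set" expressed as a concurrent map with Unit values.
val set = TrieMap[String, Unit]()

set.put("myStringKey", ()) // () is the sole value of type Unit
println(set.contains("myStringKey")) // true
```

Since () is the only value of Unit, nobody can insert a meaningful value, which matches the ConcurrentHashMap<K, Void> idiom from Java.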

### CompsciOverflow

#### How would one go about creating a method that moves/animates an object from a text file? (java)

Assuming the text file looks like this:

GO 10 20 5

where 10 is the x coordinate, 20 is the y coordinate, and 5 is the amount of time (in seconds) to move the object to the desired x-y coordinate.

### StackOverflow

#### SSL with BlueEyes

I am considering using the BlueEyes framework to build HTTPS RESTful web services in Scala for a project. However, as the documentation is pretty poor, I can't figure out whether it can be configured to work with SSL.

#### Why didn't scala design around Integer Overflow?

I am a former Java developer and I have recently watched the insightful and entertaining introduction to Scala for Java developers by professor Venkat Subramaniam (https://www.youtube.com/watch?v=LH75sJAR0hc).

A major point introduced is the elimination of declared types in favor of "type inference". Presumably, this means the compiler recognizes the type I intend to use from the context.

Being an application security expert by trade, the first thing I tried to do is break this type inference... Example:

// declare a function that returns the square of an input Int. The return type is to be inferred.
scala> val square = (x:Int) => x*x
square: Int => Int = <function1>
// I can see the compiler inferred an Int for the output value, which I do not agree with.

scala> square(2147483647)
res1: Int = 1
// integer overflow


My question is: why did the compiler not see that "*" is an operator with a risk of overflow, and wrap the inputs in something a little more protective, like a BigInteger?

According to the professor, I am supposed to forget about the internal implementation and just get on with my business logic. But after my quick demonstration, I'm not so sure that Scala is safe for a programmer who doesn't understand what the compiler is doing with their methods.
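
For comparison, a sketch showing that the wrap-around is a property of the annotated Int type, and that arbitrary precision is an explicit opt-in rather than something inference will choose for you:

```scala
// Int arithmetic wraps on overflow, exactly as the JVM defines it.
val squareInt = (x: Int) => x * x

// BigInt arithmetic never wraps; the annotation is the opt-in.
val squareBig = (x: BigInt) => x * x

squareInt(2147483647) // 1 (wrapped: (2^31 - 1)^2 mod 2^32)
squareBig(2147483647) // 4611686014132420609
```

Inference only propagates the types you wrote; it does not second-guess them, because silently promoting Int to BigInt would change performance and interop characteristics behind the programmer's back.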

#### simple construction function with play Form

I'm trying to make a simple form which returns a model. The form parameters do not map exactly to the parameters of the case class. I'm following the code sample listed here:

https://www.playframework.com/documentation/2.0/ScalaForms

val registerUserForm: Form[User] = Form {
  mapping(
    "email" -> text,
    "name" -> text
  )(
    (user: User) => Some(user.email, user.password, user.name)
  )
}


Here's the case class:

case class User(email: String, password: String, name: String, posts: Seq[Post])


I can't see anything that I'm doing that differs from the code sample, but IntelliJ is spouting several errors at me.

#### Cannot use Scala Worksheet after update

After a recent update of Scala IDE, I cannot open any existing worksheet or create a new one. The error message is

Plug-in org.scalaide.worksheet was unable to load class org.scalaide.worksheet.editor.ScriptEditor.


I am experiencing this problem both on Mac (Yosemite) and Ubuntu 14.04.
I am using:

• Eclipse Luna
• Java 8
• Scala 2.11.2
• Scala IDE 4.0.0 Release Candidate 1! (4.4)

Any help is appreciated.

Here is the error detail

org.eclipse.core.runtime.CoreException: Plug-in org.scalaide.worksheet was unable to load class org.scalaide.worksheet.editor.ScriptEditor.
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.throwException(RegistryStrategyOSGI.java:194)
at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:178)
at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:905)
at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243)
at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:55)
at org.eclipse.ui.internal.WorkbenchPlugin$1.run(WorkbenchPlugin.java:294) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPlugin.createExtension(WorkbenchPlugin.java:289) at org.eclipse.ui.internal.registry.EditorDescriptor.createEditor(EditorDescriptor.java:235) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:349) at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.createPart(CompatibilityPart.java:265) at org.eclipse.ui.internal.e4.compatibility.CompatibilityEditor.createPart(CompatibilityEditor.java:63) at org.eclipse.ui.internal.e4.compatibility.CompatibilityPart.create(CompatibilityPart.java:303) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:55) at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:888) at org.eclipse.e4.core.internal.di.InjectorImpl.processAnnotated(InjectorImpl.java:869) at org.eclipse.e4.core.internal.di.InjectorImpl.inject(InjectorImpl.java:120) at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:337) at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:258) at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:162) at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.createFromBundle(ReflectionContributionFactory.java:104) at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.doCreate(ReflectionContributionFactory.java:73) at org.eclipse.e4.ui.internal.workbench.ReflectionContributionFactory.create(ReflectionContributionFactory.java:55) at 
org.eclipse.e4.ui.workbench.renderers.swt.ContributedPartRenderer.createWidget(ContributedPartRenderer.java:127) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createWidget(PartRenderingEngine.java:983) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:662) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.safeCreateGui(PartRenderingEngine.java:766) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.access$2(PartRenderingEngine.java:737)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$7.run(PartRenderingEngine.java:731) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.createGui(PartRenderingEngine.java:715) at org.eclipse.e4.ui.workbench.renderers.swt.StackRenderer.showTab(StackRenderer.java:1250) at org.eclipse.e4.ui.workbench.renderers.swt.LazyStackRenderer$1.handleEvent(LazyStackRenderer.java:68)
at org.eclipse.e4.ui.services.internal.events.UIEventHandler$1.run(UIEventHandler.java:40) at org.eclipse.swt.widgets.Synchronizer.syncExec(Synchronizer.java:187) at org.eclipse.ui.internal.UISynchronizer.syncExec(UISynchronizer.java:156) at org.eclipse.swt.widgets.Display.syncExec(Display.java:4721) at org.eclipse.e4.ui.internal.workbench.swt.E4Application$1.syncExec(E4Application.java:218)
at org.eclipse.e4.ui.services.internal.events.UIEventHandler.handleEvent(UIEventHandler.java:36)
at org.eclipse.equinox.internal.event.EventHandlerWrapper.handleEvent(EventHandlerWrapper.java:197)
at org.eclipse.equinox.internal.event.EventHandlerTracker.dispatchEvent(EventHandlerTracker.java:197)
at org.eclipse.equinox.internal.event.EventHandlerTracker.dispatchEvent(EventHandlerTracker.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.equinox.internal.event.EventComponent.sendEvent(EventComponent.java:39)
at org.eclipse.e4.ui.services.internal.events.EventBroker.send(EventBroker.java:81)
at org.eclipse.e4.ui.internal.workbench.UIEventPublisher.notifyChanged(UIEventPublisher.java:59)
at org.eclipse.emf.common.notify.impl.BasicNotifierImpl.eNotify(BasicNotifierImpl.java:374)
at org.eclipse.e4.ui.model.application.ui.impl.ElementContainerImpl.setSelectedElement(ElementContainerImpl.java:171)
at org.eclipse.e4.ui.internal.workbench.ModelServiceImpl.showElementInWindow(ModelServiceImpl.java:488)
at org.eclipse.e4.ui.internal.workbench.ModelServiceImpl.bringToTop(ModelServiceImpl.java:454)
at org.eclipse.e4.ui.internal.workbench.PartServiceImpl.delegateBringToTop(PartServiceImpl.java:694)
at org.eclipse.e4.ui.internal.workbench.PartServiceImpl.bringToTop(PartServiceImpl.java:387)
at org.eclipse.e4.ui.internal.workbench.PartServiceImpl.showPart(PartServiceImpl.java:1134)
at org.eclipse.ui.internal.WorkbenchPage.busyOpenEditor(WorkbenchPage.java:3210)
at org.eclipse.ui.internal.WorkbenchPage.access$23(WorkbenchPage.java:3125) at org.eclipse.ui.internal.WorkbenchPage$9.run(WorkbenchPage.java:3107)
at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70)
at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:3102)
at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:3066)
at org.eclipse.ui.internal.WorkbenchPage.openEditor(WorkbenchPage.java:3056)
at org.eclipse.ui.ide.IDE.openEditor(IDE.java:541)
at org.eclipse.ui.ide.IDE.openEditor(IDE.java:500)
at org.eclipse.jdt.internal.ui.javaeditor.EditorUtility.openInEditor(EditorUtility.java:360)
at org.eclipse.jdt.internal.ui.javaeditor.EditorUtility.openInEditor(EditorUtility.java:167)
at org.eclipse.jdt.ui.actions.OpenAction.run(OpenAction.java:268)
at org.eclipse.jdt.ui.actions.OpenAction.run(OpenAction.java:233)
at org.eclipse.jdt.ui.actions.SelectionDispatchAction.dispatchRun(SelectionDispatchAction.java:275)
at org.eclipse.jdt.ui.actions.SelectionDispatchAction.run(SelectionDispatchAction.java:251)
at org.eclipse.jdt.internal.ui.packageview.PackageExplorerActionGroup.handleOpen(PackageExplorerActionGroup.java:376)
at org.eclipse.jdt.internal.ui.packageview.PackageExplorerPart$4.open(PackageExplorerPart.java:538) at org.eclipse.ui.OpenAndLinkWithEditorHelper$InternalListener.open(OpenAndLinkWithEditorHelper.java:48)
at org.eclipse.jface.viewers.StructuredViewer$2.run(StructuredViewer.java:853) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.JFaceUtil$1.run(JFaceUtil.java:50)
at org.eclipse.jface.util.SafeRunnable.run(SafeRunnable.java:178)
at org.eclipse.jface.viewers.StructuredViewer.fireOpen(StructuredViewer.java:850)
at org.eclipse.jface.viewers.StructuredViewer.handleOpen(StructuredViewer.java:1142)
at org.eclipse.jface.viewers.StructuredViewer$6.handleOpen(StructuredViewer.java:1249) at org.eclipse.jface.util.OpenStrategy.fireOpenEvent(OpenStrategy.java:278) at org.eclipse.jface.util.OpenStrategy.access$2(OpenStrategy.java:272)
at org.eclipse.jface.util.OpenStrategy$1.handleEvent(OpenStrategy.java:313) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4188) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1467) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1490) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1475) at org.eclipse.swt.widgets.Widget.notifyListeners(Widget.java:1279) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4031) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3658) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1151)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1032)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:148)
at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:636) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:579) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:135) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:648) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:603) at org.eclipse.equinox.launcher.Main.run(Main.java:1465) Caused by: java.lang.NoClassDefFoundError: org/scalaide/ui/editor/InteractiveCompilationUnitEditor at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:760) at org.eclipse.osgi.internal.loader.ModuleClassLoader.defineClass(ModuleClassLoader.java:272) at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.defineClass(ClasspathManager.java:632) at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findClassImpl(ClasspathManager.java:588) at org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClassImpl(ClasspathManager.java:540) at 
org.eclipse.osgi.internal.loader.classpath.ClasspathManager.findLocalClass(ClasspathManager.java:527) at org.eclipse.osgi.internal.loader.ModuleClassLoader.findLocalClass(ModuleClassLoader.java:324) at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:320) at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:395) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:345) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:337) at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.eclipse.osgi.internal.framework.EquinoxBundle.loadClass(EquinoxBundle.java:568) at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:174) ... 115 more Caused by: java.lang.ClassNotFoundException: org.scalaide.ui.editor.InteractiveCompilationUnitEditor cannot be found by org.scalaide.worksheet_0.2.6.v-2_11-201410211328-8101792 at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:432) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:345) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:337) at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:160) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 131 more  #### Mark import as used for IntelliJ I have an import statement of an implicit function that IntelliJ thinks is unused due to a bug. Whenever I execute the Organize Imports command, IntelliJ removes that import line. Is there a way to tell IntelliJ to leave this line alone? #### Spark: produce RDD[(X, X)] of all possible combinations from RDD[X] Is it possible in Spark to implement '.combinations' function from scala collections?  /** Iterates over combinations. 
* * @return An Iterator which traverses the possible n-element combinations of this$coll.
*  @example  "abbbc".combinations(2) = Iterator(ab, ac, bb, bc)
*/


For example, how can I get from RDD[X] to RDD[List[X]] or RDD[(X,X)] for combinations of size = 2? And let's assume that all values in the RDD are unique.
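
A common approach for size = 2 (a sketch, shown on a plain Seq standing in for the RDD) is a cartesian product that keeps one ordered representative of each pair. In Spark this would be rdd.cartesian(rdd).filter { case (a, b) => a < b }, assuming the values are unique and have an ordering:

```scala
val xs = Seq(1, 2, 3)

// Cartesian product, keeping each unordered pair exactly once via a < b.
val combos = for (a <- xs; b <- xs if a < b) yield (a, b)

println(combos) // List((1,2), (1,3), (2,3))
```

Note the cartesian step is O(n^2) in the number of elements, which is unavoidable here since the output itself has n(n-1)/2 pairs.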

### TheoryOverflow

#### Connecting partial paths to form a hamiltonian cycle

For an undirected graph that consists of partial paths, such that each vertex is part of one of those paths and there are edges between all the paths, is there an efficient algorithm to connect all the paths to form one Hamiltonian cycle?

### /r/compsci

#### What am I missing?

I'm a self-taught programmer; I work as a backend engineer and a freelance mobile developer, and I'm an electronics hobbyist. I'm currently switching majors to Computer Science for my master's degree.

I'm really interested in the mathematics that is useful for Computer Science and that can be used in my programming career.

What I covered so far:

- Discrete math
- Linear algebra
- Calculus
- Probability and probabilistic models (like Bayes and Markov)
- A bit of statistics
- Theory of Computation (the Sipser book)

Even though I have read books from the above-mentioned fields, I still face problems with, for example, neural networks: I just don't get how they work. I can barely implement a backpropagation neural network. I understand the concept behind it, but I don't understand how the math behind it leads to the concept of learning. I don't know, maybe I'm missing a good introductory machine learning book.

I believe a programmer should be diverse in his learning, at least reading about different fields and understanding their cores. So, after finishing an image processing book, I'm looking forward to the fields below:

- Compression
- Encryption (I know all the practical stuff, but have no idea how to implement one, for example, or the theory behind it)
- Computer Vision
- Machine Learning (made it through half of Andrew Ng's course)

Yes, I'm interested in all the above. I know the practical stuff and I have used tools such as OpenCV and others. But I prefer learning the core of these tools. What math topics should I cover other than the ones I mentioned above? I'm ordering a couple of books soon (if you have read a good book you can recommend, please do so).

It seems I'm missing something in mathematics that I just can't see yet, and I have no idea what to do.

submitted by m0ei

### StackOverflow

#### Apache Spark filter elements

I have two RDDs: points and pointsWithinEps. Their content is shown in the figures below:

A Vector represents an x, y coordinate. pointsWithinEps represents pairs of points and the distance between them. I want to loop over all points and, for every point, filter only those elements of pointsWithinEps that have it as the x (first) coordinate. So for the first point it will give the [0] and [1] vectors from pointsWithinEps. I have the following code:

for (i <- 0 until points.count.toInt) {
  val p = points.take(i + 1).drop(i)
  val currentPointNeighbours = pointsWithinEps.filter {
    case ((x, y), distance) =>
      x == p
  }
  currentPointNeighbours.foreach(println)
  println("----")
}


It does not work correctly. What is wrong with the code?
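
A likely culprit (hedged, since the figures aren't reproduced here): points.take(i + 1).drop(i) yields a one-element collection rather than the element itself, so x == p compares a point against a collection and is always false. A sketch with made-up stand-in data (plain Seqs standing in for the RDDs), comparing against the element directly:

```scala
// Hypothetical stand-ins for the two RDDs.
val points = Seq(Vector(1.0, 2.0), Vector(3.0, 4.0), Vector(9.0, 9.0))
val pointsWithinEps = Seq(
  ((Vector(1.0, 2.0), Vector(3.0, 4.0)), 2.83),
  ((Vector(1.0, 2.0), Vector(9.0, 9.0)), 10.63)
)

for (p <- points) {
  // Compare against the point itself, not a one-element collection.
  val currentPointNeighbours = pointsWithinEps.filter {
    case ((x, _), _) => x == p
  }
  println(p + " -> " + currentPointNeighbours.size)
}
```

In actual Spark code there is a further problem: calling take on one RDD once per element of another is very slow, and RDD operations can't be nested. Collecting the points to the driver first, or keying pointsWithinEps by its first coordinate and using a join, are the usual routes.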

### TheoryOverflow

#### Doubts about dichotomy theorem [on hold]

Good afternoon.

I have a little doubt about Schaefer's dichotomy theorem:

http://www.ccs.neu.edu/home/lieber/courses/csg260/f06/materials/papers/max-sat/p216-schaefer.pdf

I've seen in that paper that every relation that has exactly one negated literal per clause is in P.

So something like this:

¬x1 ^ (¬x2 xor x3) ^ (¬x4 or x5 or x6 or x7)

should be in P, am I right?

Now I've seen in this other paper that the result can be refined, so that we can determine the exact complexity class of a Boolean formula, not only that it is in P:

http://people.cs.umass.edu/~immerman/pub/cspJCSS.pdf

I looked at the paper, but I don't know anything about polymorphisms, apart from the fact that polymorphism is something from object-oriented programming.

Could you tell me exactly what the complexity class is of a formula that has exactly one negated literal per clause?

Thanks.

Sorry for my English.

### QuantOverflow

#### General way to solve Partial differential equation using Feynman kac representation

Consider the following PDE on the interval $[0,T]$:

$\frac{\partial F}{\partial t}(t,x)+\mu(t,x)\frac{\partial F}{\partial x}(t,x)+\frac{1}{2}\sigma^2(t,x)\frac{\partial^2F}{\partial x^2}(t,x)=rF(t,x)$ with the terminal condition

$F(T,x)=\phi(x)$

and let $X(t)$ solve the stochastic differential equation:

$dX(t)=\mu dt+ \sigma dW(t)$ with initial condition

$X(t)=x$

In order to solve it I apply Itô's formula to $F(t,X(t))$ and get:

$dF=\left[\frac{\partial F}{\partial t}+\mu \frac{\partial F}{\partial x}+\frac{1}{2}\sigma^2 \frac{\partial^2 F}{\partial x^2} \right]dt + \sigma \frac{\partial F}{\partial x}dW$

and since the problem states that the expression in brackets equals $rF(t,x)$, we can rewrite it and get:

$dF=rFdt+\sigma \frac{\partial F}{\partial x}dW$

If we integrate this from $t$ to $T$ and then take the expected value, we end up with:

$F(t,x)=e^{-r(T-t)}E^Q[\phi(X(T))]$, which is the final answer,

and this is usually the general way to solve this kind of partial differential equation. I really do not understand the integration and expectation part and would really appreciate any hints or steps to do it.
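For completeness, the integration-and-expectation step goes roughly as follows (this is the standard Feynman-Kac reasoning, sketched under the usual integrability assumptions). Apply Itô's formula to the discounted process $e^{-rs}F(s,X(s))$:

$d\left(e^{-rs}F\right)=e^{-rs}\left(dF-rF\,ds\right)=e^{-rs}\sigma\frac{\partial F}{\partial x}\,dW(s)$

Integrating from $t$ to $T$ gives

$e^{-rT}F(T,X(T))-e^{-rt}F(t,x)=\int_t^T e^{-rs}\sigma\frac{\partial F}{\partial x}\,dW(s)$

The right-hand side is a stochastic integral and hence, under integrability conditions, a martingale with zero expectation. Taking conditional expectations given $X(t)=x$ and using the terminal condition $F(T,X(T))=\phi(X(T))$ yields

$F(t,x)=e^{-r(T-t)}\,E\left[\phi(X(T))\mid X(t)=x\right]$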

### StackOverflow

#### Resources for Building Dynamic Lift Shopping Cart?

Here's what I'd like to do with Lift: I want to build a dynamic shopping cart, with lines able to be added and removed via AJAX calls. The total needs to be wired to the specific lines. Each line would include a number, the length of time for a lease, and a calculated price based on that, so I would have to add wired cells on each addable/removable line as well. So it would look something like this:

Number          Length of Lease          Price                      Remove?
(AJAX Textbox)  (AJAX Dropdown Select)   (Plain Updateable Text)    (Ajax Checkbox)
(Another Row)...
Total: ______


The problem I'm running into is that I can only find resources for building a static page that displays all of this via wiring. Using the Lift demo site, I can pull up code that lets me add new lines, but it doesn't seem conducive to removing lines. (This in general is one of my frustrations with Lift at the moment: a "little extra detail" of difference from a tutorial ends up requiring me to completely change tack and spend hours more on work and research, and I want to figure out how I'm probably approaching these problems wrongly!) Alternatively, I can use CSS selectors to dynamically create content, but I don't know how to effectively wire these together.

In addition, all of my attempts end up creating 2-3 times the amount of code I would have written to simply do some JQuery updates on the page, so I suspect that I'm doing something wrong and overcomplicating everything.

What resources would people recommend to set me on the right path?

#### Future language for Java Developers [on hold]

I have been working as a Java developer in the UK for the last 6 years and am experienced in frameworks like Spring and Hibernate.

But I am confused about which in-demand technologies to gain expertise in. I do not have enough free time to try learning several other languages/frameworks currently on the market, but I want to learn Scala if it is the most in demand. Still, I am unsure whether Scala is worth learning, given that future Java releases might let Java outrun Scala.

So could you suggest the best server-side languages/frameworks, so that I could try learning them and improve my skills?

Any suggestions/opinions?

Thanks

### CompsciOverflow

#### Solve a basic mutex lock problem while preventing deadlock

I am solving a problem with a bridge: you have a one-way bridge with people coming from each side. To prevent deadlock (two people going in opposite directions on the bridge at the same time), I used a mutex lock. However, this could still result in starvation. How would I prevent starvation?
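One common starvation-free refinement of a plain mutex is a ticket (FIFO) lock, which admits crossers strictly in arrival order. A minimal Python sketch (the class and method names are illustrative, not from the original problem statement):

```python
import threading

class FairBridge:
    """A FIFO "ticket" lock: each arriving person takes a ticket and
    crossers are admitted strictly in ticket order, so nobody starves."""
    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # next ticket to hand out
        self._now_serving = 0   # ticket currently allowed on the bridge

    def enter(self):
        with self._cond:
            my_ticket = self._next_ticket
            self._next_ticket += 1
            while self._now_serving != my_ticket:
                self._cond.wait()

    def leave(self):
        with self._cond:
            self._now_serving += 1
            self._cond.notify_all()
```

Because tickets are handed out in arrival order and leave always advances _now_serving, every waiter is admitted after finitely many crossings. Note this sketch admits one crosser at a time, which is stricter than batching same-direction crossers; it only illustrates the FIFO idea.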

### StackOverflow

#### Play + Scala + Reactivemongo + Rest Query with 2 params

I downloaded the Typesafe application "modern-web-template", which implements a CRUD application with Play + Scala + ReactiveMongo.

I was trying to add new functionality. I want to be able, by calling a URL with two params like this:

localhost:9000/users?dni&30000000


first I added this route to the routes file

GET     /users                      @controllers.Users.findUsersParams(tipoDocumento: String ?= "", numeroDocumento:  String ?= "")


Then I added this method to the controller

def findUsersParams(tipoDocumento: String, numeroDocumento: String) = Action.async {
  // let's do our query
  val cursor: Cursor[User] = collection.
    // find all
    find(Json.obj("tipoDocumento" -> tipoDocumento, "numeroDocumento" -> numeroDocumento)).
    // sort them by creation date
    sort(Json.obj("created" -> -1)).
    // perform the query and get a cursor of JsObject
    cursor[User]

  // gather all the JsObjects in a list
  val futureUsersList: Future[List[User]] = cursor.collect[List]()

  // transform the list into a JsArray
  val futurePersonsJsonArray: Future[JsArray] = futureUsersList.map { users =>
    Json.arr(users)
  }
  // everything's ok! Let's reply with the array
  futurePersonsJsonArray.map {
    users =>
      Ok(users(0))
  }
}


I am not able to return the expected result, which should be one user; instead, I am getting all the users in the collection.

#### Looking for a clean solution to validate the input in json with play framework

It is straightforward to read the expected fields (whether optional or not) and validate them, but I fail at how to throw an exception properly if an unexpected field is detected in the given JSON. It'd be great if the Play framework could help with this in one or two statements. Yes, I can process the JSON, get all the fields' names, and compare them to the expected list, but that appears to be a bit complicated (when the JSON input's format is complicated).

For example, the input json is expected as following:

{
  "param": 1,
  "period": 2,
  "threshold": 3,
  "toggle": true
}


and the Scala code to populate the class EventConfig from the JSON input:

import play.api.libs.json._
import play.api.libs.functional.syntax._

case class EventConfig(param: Int, period: Int, threshold: Int, toggle: Boolean)

object EventConfig {
  implicit val reader: Reads[EventConfig] = (
    (__ \ "param").read[Int] and
    (__ \ "period").read[Int] and
    (__ \ "threshold").read[Int] and
    (__ \ "toggle").read[Boolean]
  )(EventConfig.apply _)
}


I'd like to have an exception thrown if an unexpected field is detected in the JSON, such as:

{
  "param": 1,
  "period": 2,
  "threshold": 3,
  "toggle": true,
  "foo": "bar"
}


### CompsciOverflow

#### What problem does cache coloring solve?

According to what I have read from two different sources, cache coloring is (was?) required in order to:

• Counter the problem of aliasing: Prevent two different virtual addresses with the same physical address from mapping to different cache sets. (According to a CS Stack Exchange Answer)

• Exploit the spatial locality property of virtual memory: by guaranteeing that two adjacent blocks of virtual memory (not necessarily adjacent in physical memory), do not map to the same cache index. (According to Wikipedia)

These seem to me to be fundamentally different definitions, and without comprehending the motivation for cache coloring, I can't seem to understand the mechanism for selecting the number of colors required. Are they really one and the same?

If the spatial locality of virtual memory is the primary motivation, is cache coloring really required for VIPT caches, where the index of the cache is derived from the virtual memory to begin with? Or is cache coloring simply used in VIPT caches to get around aliasing?

### /r/compsci

#### What seminar topics would be best for presentation, for students who don't have that much interest in Computer science? (List inside)

I need a good topic. My teacher gave me 100 topics to choose from; I picked some that caught my eye, but I don't know what will interest the students or which topic would be interesting for me to present a seminar about. So please help!

• HTML5 -no
• Sixth sense technology -no
• Phishing techniques -maybe
• Extreme Programming -no
• High speed data in mobile network -no
• Computer human interface -no
• Holographic data storage -no (too hard)
• Cyber terrorism -Interesting and simple
• Artificial Intelligence
• Chameleon Chip -A chip that changes itself dynamically
• Smart Dust - mini robot-like dust that detects, temperature, chemicals, vibrations, light and more
• Dark Net -maybe. simple. and interesting to everyone
• Plastic Memory -no
• Smart Fabrics -no
• Bio Chip -no
• Brain Computer Interface -no
• Computer Clothing -no
• Skinput - maybe needs more research
• Tongue Drive System -no (although really interesting)
• Mobile Computing -no
• Reverse Engineering -no
• Virtual campus -no
• Telepresence - no
• 3D Internet -no
• Mobile Virtual reality -no
• Paper Battery -no
• Recombinant DNA -no
• Face recognition technology -no
• Windows DNA -no
• Pixie Dust -no
• Parasitic Computing -no
• Bio Chips -no
• Digital smell technology -no
• Augmented Reality -no
• MPEG-7
• Wearable Bio sensors -no

I have until tomorrow.

In the meantime I'll check each topic one by one.

Edit: so far my picks are

• Cyber terrorism -Interesting and simple
• Artificial Intelligence interesting
• Chameleon Chip -A chip that changes itself dynamically
• Dark Net -maybe. simple. and interesting to everyone
submitted by seever

### TheoryOverflow

#### What makes for a good paper abstract?

I am writing a paper for a TCS conference and wondering what the community's opinion is on how abstracts in TCS should be written, good practices, or rules of thumb. I thought that I had read enough papers to be able to answer this question on my own, but after thinking about it a bit, I'm stumped.

Some specific questions I do not know the answer to:

• What is the purpose of the abstract? Is it to help those who do not want to read the paper figure this out as early as possible? To help a future researcher understand whether the paper contains a result of use to them? To communicate the key novelty/results/advancement of the paper? Something else?
• Who is the intended audience of the abstract? General computer science researchers, general TCS researchers, or researchers who specialize in the topic of the paper?
• Similarly, to what extent should technical terms be introduced/defined in the abstract?
• Does motivation belong in the abstract? Sometimes if I am not careful I find my abstracts turning into introductions.
• What is the value of listing precise results versus outlining the general question addressed? In other words, how specific should the abstract be regarding the results? It seems hard to hit multiple levels of specific/general without being redundant.
• When is it appropriate to mention/summarize prior work and background?

I was looking over responses to this question on examples of good TCS papers. There did not seem to be enough samples for me to learn from, however :). The approaches to abstract-writing seem to vary widely.

### StackOverflow

#### Testing a scala class with junit

I'm trying to implement a simple stack in scala and test it with both junit and scalatest to get a feel for the differences. However, I'm running into some issues.

Here is the scala class:

class Stack[T] {
  var list: List[T] = Nil

  def push(x: T) { list = x :: list }

  def pop(): T = {
    val head = list.head
    list = list.tail
    head
  }

  def isEmpty(): Boolean = list == Nil
}


I compiled this with scalac, and tried to call it from a plain java junit class like this:

import org.junit.Test;

public class StackTests {

    @Test
    public void testPush() {
        Stack<String> s = new Stack<String>();
    }
}


However when I try to run StackTests:

java -cp ../junit/junit.jar:../junit/hamcrest-core.jar:. org.junit.runner.JUnitCore StackTests


.

...
There was 1 failure:
1) testPush(StackTests)
java.lang.NoClassDefFoundError: scala/collection/immutable/List
at StackTests.testPush(StackTests.java:9)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
...


The error message is pretty clear, but I don't know how to fix it. Do I have to reference immutable list in the java test class somehow?

### QuantOverflow

#### Historical data resources for Indian market

What is the best source for historical EOD data for the Indian stock market? The data from Yahoo Finance for some companies is not up to date, and Google Finance doesn't provide adjusted close prices. What should I do? I need the data for quantitative analysis.

Is there a way to calculate the adjusted close from the Google Finance close price series?

If I'm using the expression (close[n]-close[n-1])/close[n-1] for return calculations, it will show erroneous data because of the change in price due to a stock split.
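For illustration only (this is not any vendor's exact adjustment recipe): one way to make the return expression above survive splits is to back-adjust the series by the cumulative split factor. The function names and the split encoding below are assumptions of this sketch:

```python
def adjust_for_splits(closes, splits):
    """closes: raw close prices, oldest first.
    splits: dict mapping the index of the first post-split day to the
            split ratio (e.g. {3: 2.0} for a 2-for-1 split effective at day 3).
    Returns the series with pre-split prices divided by the cumulative ratio."""
    factor = 1.0
    adjusted = []
    # Walk backwards: once we pass a split day, all earlier prices
    # are divided by the accumulated split ratio.
    for i in range(len(closes) - 1, -1, -1):
        adjusted.append(closes[i] / factor)
        if i in splits:
            factor *= splits[i]
    return list(reversed(adjusted))

def simple_returns(closes):
    # The question's expression: (close[n] - close[n-1]) / close[n-1]
    return [(closes[n] - closes[n - 1]) / closes[n - 1]
            for n in range(1, len(closes))]
```

On the adjusted series, the spurious ~50% "return" across a 2-for-1 split disappears.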

### StackOverflow

#### Scala: cleaning up constructor parameters

I am just learning Scala and am concerned about cleaning up constructor parameters.

In Java I had a class like this:

public class Example {
    private A a;
    private B b;

    private SelectorA aSelector;
    private SelectorB bSelector;

    public Example(SelectorA aSelector, SelectorB bSelector) {
        this.aSelector = Objects.requireNonNull(aSelector);
        this.bSelector = Objects.requireNonNull(bSelector);
    }

    public void start() {
        if (a == null) {
            a = aSelector.select();
            aSelector = null; // Removing reference.
        }

        if (b == null) {
            b = bSelector.select();
            bSelector = null; // Removing reference.
        }

        // Go on.
    }
}


Of course, it was more complex, with parameters and such. This class is supposed to be a long-lived one, and I just wanted to make sure it doesn't hold any references it does not need.

I am about to port this class to Scala and made such class:

class Example(_aSelector: Selector[A], _bSelector: Selector[B]) {
  private lazy val _a = _aSelector() // Will _aSelector reference be cleared?
  private lazy val _b = _bSelector() // Will _bSelector reference be cleared?

  def start() = {
    // Use _a.
    // Use _b.
    // Go on.
  }
}


Again, it's going to be more complex, but the idea is clear. So, the question:

Is Scala capable of detecting which constructor parameters (_aSelector and _bSelector in the example above) are no longer needed? Or do I have to explicitly clear each reference myself, setting _aSelector = null once _a has been computed and _bSelector = null once _b has been computed?


P.S. I understand that the GC is what detects out-of-scope references, but the Scala compiler is what defines variable scope, and therefore it defines the behaviour.

### TheoryOverflow

#### Applications of $p$-adic numbers in CS

Are there any concrete (or a rich source of) examples of application of $p$-adic numbers in computer science?

### QuantOverflow

#### API-based equity screeners?

I know there are APIs from different brokers that allow you to trade and also obtain information about specific companies, but I wonder if there are equity/asset screeners that are API-based and can be triggered in real time. For example, I'd love to have an API that would alert me of any equities that are:

• Near 52 week low
• Have P/E < 30
• 5 year average earnings growth is > 5%
• etc.

I could do it myself with brokerage screeners, but they are totally manual; I'd have to build and run them. If there are APIs that can be tied to automated scripts, that'd be amazing, as it would mean both speed and coverage in terms of trading opportunities.

### CompsciOverflow

#### Tree graph descent ordering

I'm working on a point cloud subdivided into an octree, but it could be any other kind of tree graph as well. From a given query leaf node, I need to visit all the neighbouring leaves. So I start with the query node, then make a recursive descent into its siblings (unless they are already leaves), then into the siblings of its parent, grandparent, etc., up to the root.

The leaves need to be visited in an order given by an ordering function (node a, node b) -> bool. A callback function is called for each visited node, and when it returns false the algorithm should stop. In practice it will only visit a small portion of the tree.

So it would be inefficient to store the full set of leaves in a list, sort it using the ordering function, and then iterate through it. Instead only subsets of the leaves will be sorted, i.e. the siblings of a node that was traversed before, and child nodes during the depth/breadth-first descents.

So the ordering would need to satisfy certain properties, depending on how the tree is traversed, to ensure that it remains correct on the full sequence of visited leaves.

My question is: what are the mathematical fundamentals of this, e.g. which topics should I look for in graph theory etc. for this specific problem?

### StackOverflow

#### How to remove particular occurrences from a sequence in Clojure

If I have the sequence

[1 1 1 1 3 2 4 1]


how can I remove a particular number from that sequence? For example

(remove [1 1 1 1 3 2 4 1] 1) -> [3 2 4]
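Note that Clojure's remove takes a predicate first and the collection second, unlike the call sketched above. For reference, the desired behaviour, dropping every occurrence of a value while preserving order, looks like this in Python (the helper name is illustrative):

```python
def remove_all(value, seq):
    """Drop every occurrence of value from seq, preserving order."""
    return [x for x in seq if x != value]
```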

#### Java 7 to Java 8 breaks our Lift JSON parser 2.6-RC1

I have a weird Scala runtime error running the Lift JSON parser 2.6-RC1 and Java 8.

The code for the error is at https://github.com/listatree/lift-json-java8

The stacktrace error is at: https://github.com/listatree/lift-json-java8/blob/master/stacktrace.txt

The error states a ScalaSigParserError validating the class coming from the Lift JSON parser.

I know that the Lift JSON parser does extreme things with reflection and/or the AST tree, but so far it was working.

NOTE: If I test with Java 8 on Mac OS X it works; if I test with Java 8 on a Linux machine it breaks (sometimes). With Java 7 it works all the time.

#### Apache Spark foreach and filter error

I am implementing an algorithm in Apache Spark using Scala. I do not understand what is wrong with the foreach loop and why IntelliJ IDEA is highlighting it.

where pointsWithinEps has the following structure (it holds two points and the distance between them):

((DenseVector(1.1, 1.2),DenseVector(0.1, 0.1)),2.21)
((DenseVector(1.1, 1.2),DenseVector(0.4, 2.1)),1.3000000000000003)


and points has the following:

DenseVector(1.1, 1.2)
DenseVector(6.1, 4.8)


How can I fix the loop? I want to loop over all points and filter pointsWithinEps so that its first point equals the point in the foreach loop.

#### Scala import not working - object Database is not a member of package com.me.project.controllers.com.me.project.database

I have an issue when trying to import in scala. The object Database exists under com.me.project.database but when I try to import it:

import com.me.project.database.Database


I get the error:

object Database is not a member of package com.me.project.controllers.com.me.project.database


Any ideas what the problem is?

Edit:

It is worth mentioning that the import is in the file Application.scala under the package com.me.project.controllers. I can't figure out why it would append the import to the current package, though. Weird...

Edit 2:

So using:

import _root_.com.me.project.database.Database


Does work as mentioned below. But should it work without the _root_? The comments so far seem to indicate that it should.

So it turns out that I just needed to clean the project for the import to work properly. Both:

import _root_.com.me.project.database.Database

import com.me.project.database.Database


are valid solutions. Eclipse had just gotten confused.

#### Play Authorization

I have written a Secured trait for authorization in an application, as follows:

trait Secured { self: FutureConverter =>

import scala.concurrent.ExecutionContext.Implicits.global

case Some(value) =>
if(value == "some value") { Future(Some(value)) }
else Future(None)
case None => Future(None)
}

def withAuth(f: => String => Request[AnyContent] => Result) = {
Action(request => f(user)(request))
}
}
}


When I run the code, I get an error: missing arguments for method username in trait Secured.

But the manual does not explain how to pass a RequestHeader into username. How can I do that?

### CompsciOverflow

#### Interaction between Dynamic scoping and evaluation order of expressions

Given the following (arbitrary language, although I think it is close to Algol 60) program:

program main;                               // A main parent level
  var i : integer;                          // A 'global' variable

  (* Note that all parameters are passed by value here *)

  function f1 (j : integer) : integer;      // A Child function
  begin { f1 }
    i := i + 3;
    f1 := 2 * j - i;
  end; { f1 }

  function f2 (k : integer) : integer;      // Another Child function, same level as f1
    var i : integer;                        // Here, there is a variable that is declared
  begin { f2 }                              // but no value assigned
    i := k / 2;
    f2 := f1(i) + f1(k);
  end; { f2 }

begin { main }                              // Running/Calling/Executing the code
  i := 8;
  i := i + f2(i);
  writeln(i);
end. { main }


How would you trace the values of the variables throughout the program when it is interpreted using dynamic scoping of free variables, both when the arguments appearing in expressions are evaluated left to right and when they are evaluated right to left, so that the user can watch what happens?

I have created a JS plnkr for Static Scoping with Left to Right evaluation and another for Static Scoping with Right to Left evaluation. Feel free to adapt these answers (if possible) for Dynamic Scoping, with L->R and R->L evaluation.

I chose plnkrs because I knew I could get the Static/lexical side using JS, but am unsure of how to make it happen dynamically or in another interactive environment (preferably not one I have to install).

I learn a bit more slowly on problems like this, where the final output values are asked for but the value states throughout the program aren't shown, and I am trying to get a better understanding, especially with an example I can play around with interactively, as the book examples are really bad. In the code above it also gets challenging, because it appears that the variable i in line 2 is allocated but would be undefined. But that may be my imperative/functional brain making it more complicated than it is...

### /r/emacs

#### Company and python-mode auto-expanding

I seem to be having a really strange issue. I am using company for auto-completion and in python-mode, it seems to be causing anything I type to be automatically expanded. For example, if I type "R", about a second later it automatically expands to "RETURN".

It seems to be looking for dabbrevs in all of my buffers.

EDIT: I forgot to post the minimal working init.el using emacs 24.4:

(add-hook 'after-init-hook 'global-company-mode)
(autoload 'python-mode "python-mode" "Python Mode." t)
(add-to-list 'auto-mode-alist '("\\.py\\'" . python-mode))
(add-to-list 'interpreter-mode-alist '("python" . python-mode))

Note that this is with python-mode, not python.el (I don't have the same problem with python.el).

submitted by amyannick

### QuantOverflow

#### Why does a risk-averse consumer buy optimal insurance when there is actuarially fair insurance?

I think I understand the fact that when the marginal utilities of the same function are equal (a consequence of actuarially fair insurance), the independent variables in it must be equal -- right? But what is the role of the consumer's risk aversion in this? What does a $u''<0$ condition change in comparison to a $u''>0$ condition?

Edit: Example found here

"As a risk-averse consumer, you would want to choose a value of $x$ so as to maximize expected utility, i.e.

Given actuarially fair insurance, where $p = r$, you would solve: $\max \left[pu(w - px - L + x) + (1-p)u(w - px)\right]$, since in case of an accident, your total wealth would be $w$, less the loss suffered due to the accident, less the premium paid, plus the amount received from the insurance company.

Differentiating with respect to $x$, and setting the result equal to zero, we get the first-order necessary condition as: $(1-p)pu'(w - px - L + x) - p(1-p)u'(w - px) = 0$,

which gives us: $u'(w - px - L + x) = u'(w - px)$

Risk-aversion implies $u'' < 0$, so that equality of the marginal utilities of wealth implies equality of the wealth levels, i.e.

$w - px - L + x = w - px$,

so we must have $x = L$.

So, given actuarially fair insurance, you would choose to fully insure your car. Since you're risk-averse, you'd aim to equalize your wealth across all circumstances - whether or not you have an accident.

However, if $p$ and $r$ are not equal, we will have $x < L$; you would under-insure. How much you'd under-insure would depend on how much greater $r$ was than $p$."

Now, how does the condition $u''<0$ change anything to reach the result expressed above?
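To see the quoted result numerically, the expected-utility maximisation above can be checked by brute force. This sketch assumes a concrete concave (risk-averse) utility $u(w)=\sqrt{w}$ and illustrative numbers for $w$, $L$, $p$ and $r$, none of which come from the quoted passage:

```python
from math import sqrt

def expected_utility(x, w=100.0, L=36.0, p=0.25, r=0.25):
    """E[u(wealth)] for coverage x, premium rate r, loss probability p,
    with the concave (risk-averse) utility u = sqrt."""
    wealth_accident = w - r * x - L + x     # loss occurs, claim x is paid
    wealth_no_accident = w - r * x          # only the premium is paid
    return p * sqrt(wealth_accident) + (1 - p) * sqrt(wealth_no_accident)

def best_coverage(**kw):
    """Grid-search the optimal coverage x on [0, 40] in steps of 0.01."""
    xs = [i / 100 for i in range(4001)]
    return max(xs, key=lambda x: expected_utility(x, **kw))
```

With actuarially fair pricing ($r=p$) the grid optimum sits at $x=L$ (full insurance); with $r>p$ it drops below $L$ (under-insurance), matching the quoted argument.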

### StackOverflow

#### Securing a local-only Compojure/Ring Webapp?

I want to create a Ring/Compojure webapp that would enable a Clojure REPL. All of this is meant to run on a local machine, and the webapp is just a convenient GUI for the local user. Since it is enabling a REPL, I want it to be secure. But I also want it to be convenient.

Here's my idea:

• a user would run "lein ring server" to start the app.
• this opens the web-browser and causes it to fetch "/" from the app.
• any session request gets a unique session cookie, but the first session cookie is saved specially on the server
• any requests later check to see the current session cookie == first session cookie
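The cookie check in the steps above can be sketched as follows (Python for illustration only; the class and method names are my own, not Ring's API):

```python
import secrets

class FirstSessionGate:
    """Remember the token issued to the very first session and accept
    only requests that present that token."""
    def __init__(self):
        self.first_token = None

    def new_session(self):
        token = secrets.token_hex(16)      # unique per-session cookie value
        if self.first_token is None:
            self.first_token = token       # the local user's browser gets this
        return token

    def allowed(self, token):
        return token is not None and token == self.first_token
```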

Do you believe this is secure? I'm concerned with requests to your machine from other hosts, and this seems to do the trick. I'm worried about other users logged into the local machine--could they see that session cookie somehow? Would I have to enable an HTTPS server to really fix this?

I would certainly love to hear if this is a solved problem...

Thanks!

#### Scala solution to nQueen using for-comprehension

I have some difficulty in understanding the Scala solution to the n Queens problem, below is the implementation assuming isSafe is defined correctly

def queens(n: Int): Set[List[Int]] = {
  def placeQueens(k: Int): Set[List[Int]] = k match {
    case 0 => Set(List())
    case _ =>
      for {
        queens <- placeQueens(k - 1)
        col <- 0 until n
        if isSafe(col, queens)
      } yield k :: queens
  }

  placeQueens(n)
}


The for-comprehension, as I have seen, should in theory return a buffered collection, and I see here that it builds up a list of queens with k :: queens, yet it indeed returns a Set[List[Int]] as declared. Can someone shed some light on how this for-comprehension works?

Is my assumption correct that the for-expression will each time return a collection of collections, and that since in this case I deal with a Seq and a Set in the nested for-expression, it returns a Set[List[Int]]?

The question is more related to the for comprehension in implementing nQueen not nQueen in general.
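The collection semantics can be mirrored in another language: desugared, the for-expression is a flatMap over the Set returned by placeQueens(k - 1), so the result is again a Set. A Python sketch of the same recursion (using a set comprehension and tuples instead of Lists, since Python set elements must be hashable; note the Scala snippet yields k :: queens, the row number, while this sketch records the chosen column, which is the more common formulation, but the set-in/set-out behaviour is identical):

```python
def queens(n):
    """Set of all n-queens solutions; each solution is a tuple of column
    indices, most recently placed row first (mirroring the Scala list)."""
    def is_safe(col, placed):
        # placed[0] is one row above, placed[1] two rows above, etc.
        return all(col != c and abs(col - c) != i
                   for i, c in enumerate(placed, start=1))

    def place_queens(k):
        if k == 0:
            return {()}
        # The comprehension plays the role of the Scala for-expression:
        # it iterates over a set, so it yields a set.
        return {(col,) + qs
                for qs in place_queens(k - 1)
                for col in range(n)
                if is_safe(col, qs)}

    return place_queens(n)
```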

### QuantOverflow

#### Understanding how to calculate tracking error

I have come across two ways of calculating tracking error (TE), but I'm not sure if they are essentially the same.

The first way is to calculate the standard deviation of the difference between a fund's returns and a benchmark as shown here.

The second method is to run a regression and calculate the standard deviation of the error terms and shown here in section 8.

Many of the academic papers I have read use the latter. My question is: do these two methods yield the same answer?
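The two methods can be compared directly: method 1 is the standard deviation of the active returns $r_p - r_b$; method 2 is the standard deviation of the residuals from regressing $r_p$ on $r_b$. Since OLS minimises residual variance, method 2 is never larger, and the two coincide exactly when the fitted slope is 1. A pure-Python sketch (illustrative; population standard deviation is used throughout):

```python
from statistics import mean, pstdev

def te_difference(rp, rb):
    """Method 1: stdev of the fund-minus-benchmark return differences."""
    return pstdev([p - b for p, b in zip(rp, rb)])

def te_regression(rp, rb):
    """Method 2: stdev of OLS residuals from rp = alpha + beta*rb + e."""
    mp, mb = mean(rp), mean(rb)
    beta = (sum((b - mb) * (p - mp) for p, b in zip(rp, rb))
            / sum((b - mb) ** 2 for b in rb))
    alpha = mp - beta * mb
    return pstdev([p - (alpha + beta * b) for p, b in zip(rp, rb)])
```

When the fund's beta to the benchmark is close to 1, the two numbers are close; otherwise the regression-based TE is strictly smaller.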

### StackOverflow

#### Coursera - Functional Programming Principles in Scala - can't work with example project because of errors

From that course https://class.coursera.org/progfun-004/assignment

Imported this into IntelliJ IDEA.

But the problem is verifying the code, because in the course they run sbt in the console...

After running "sbt" in the console I get:

D:\learning\example>sbt
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
ass)' is broken
(bad constant pool tag 15 at byte 1501)
[error] Type error in expression


I created a new project in IntelliJ IDEA with SBT and it works... but the SBT version is different from the example project's. When I change the SBT version to the newest I get dependency errors... I'm stuck and can't move on... How do I solve a situation like this?

I guess I can try moving the whole project to Java 8, or forcing sbt in my console to work with Java 7. I don't know how to do either :)

#### Missing *out* in Clojure with Lein and Ring

I am running Lein 2 and cider 0.7.0. I made a sample ring app that uses ring/run-jetty to start.

(ns nimbus-admin.handler
  (:require [compojure.core :refer :all]
            [compojure.handler :as handler]
            [clojure.tools.nrepl.server :as nrepl-server]
            [cider.nrepl :refer (cider-nrepl-handler)]
            [clojure.tools.trace :refer [trace]]
            [ring.util.response :refer [resource-response response redirect content-type]]
            [compojure.route :as route])
  (:gen-class))

(defroutes app-routes
  (GET "/blah" req "blah")
  (route/resources "/"))

(def app (handler/site app-routes))

(defn start-nrepl-server []
  (nrepl-server/start-server :port 7888 :handler cider-nrepl-handler))

(defn start-jetty [ip port]
  (ring/run-jetty app {:port port :ip ip}))

(defn -main
  ([] (-main 8080 "0.0.0.0"))
  ([port ip & args]
     (let [port (Integer. port)]
       (start-nrepl-server)
       (start-jetty ip port))))


then connect to it with cider like:

cider-connect 127.0.0.1 7888


I can navigate to my site and eval forms in Emacs, and it will update what is running live in my nREPL session, so that is great.

I cannot see output with (print "test"), (println "test"), or (trace "out" 1).

Edit: I CAN see output using (.println System/out msg).

Finally, my project file:

(defproject nimbus-admin "0.1.0"
  :description ""
  :url ""
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [com.climate/clj-newrelic "0.1.1"]
                 [com.ashafa/clutch "0.4.0-RC1"]
                 [ring "1.3.1"]
                 [clj-time "0.8.0"]
                 [midje "1.6.3"]
                 [org.clojure/tools.nrepl "0.2.5"]
                 [ring/ring-json "0.3.1"]
                 [org.clojure/tools.trace "0.7.8"]
                 [compojure "1.1.9"]
                 [org.clojure/data.json "0.2.5"]
                 [org.clojure/core.async "0.1.346.0-17112a-alpha"]]
  :plugins [[lein-environ "1.0.0"]
            [cider/cider-nrepl "0.7.0"]])

I start the site with lein run

#### Why can't I add a field to a Scala implicit class?

Why can you not add a field to an implicit class?

For example, the code below does not store the originalName field:

object Implicit1 {

  implicit class AI1(c: Class[_]) {
    private var originalName = ""

    def setOriginalName(str: String) = originalName = str

    def getOriginalName = originalName
  }
}


### CompsciOverflow

#### How to prove a problem is NOT NP-Complete?

Is there any general technique for proving that a problem is NOT NP-complete?

I got a question on an exam that asked me to show whether some problem (see below) is NP-complete. I could not think of any real solution, and just proved it was in P. Obviously this is not a real answer.

NP-complete is defined as the set of problems that are in NP and to which every problem in NP can be reduced. So any proof should contradict at least one of these two conditions. This specific problem is indeed in P (and thus in NP). So I am stuck with proving that there is some problem in NP that can't be reduced to this problem. How on earth can this be proven?

Here is the specific problem I was given on exam:

Let $DNF$ be the set of strings in disjunctive normal form. Let $DNFSAT$ be the language of strings from $DNF$ that are satisfiable by some assignment of variables. Show whether or not $DNFSAT$ is in NP-Complete.
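For context, the standard argument that $DNFSAT$ is in P (this is the textbook reasoning, not part of the exam statement): a DNF formula is satisfiable iff at least one of its terms contains no complementary pair of literals, and that is checkable in linear time:

```python
def dnf_satisfiable(terms):
    """terms: list of terms; each term is a list of integer literals
    (k means variable x_k, -k means its negation).
    A DNF is satisfiable iff some single term can be made true, i.e.
    some term contains no pair {l, -l}."""
    for term in terms:
        lits = set(term)
        if all(-lit not in lits for lit in lits):
            return True   # set every literal in this term to true
    return False
```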

### StackOverflow

#### How to assign a variable to another variables in Ansible in a correct way?

I've faced an unexpected problem with Ansible. Here's simplified example.

I have defined some global variables in the group_vars/all file like this:

---


And use it like this in inventory file:

[physical-hosts]
phyn01 node="{{ node01 }}" ansible_ssh_host="{{ node.ipv4_address }}"


The funny part is that Ansible can ssh into each of the hosts and get the facts. But I cannot get the value of the 'node' variable for each host when executing the playbook (I have additional data there).

The working example:

- hosts: physical-hosts
  tasks:
    - name: get node variable for current host
      debug: var=node


The output is in this case:

TASK: [get node variable for current host] ************************************
ok: [phyn01] => {
    "node": {
        "some_info": "data"
    }
}


But I can't get the same if I use the following:

- hosts: physical-hosts
  tasks:
    - debug: var=hostvars.{{item}}.node
      with_items: groups['physical-hosts']


It reports as a wrong answer the following:

TASK: [debug var=hostvars.{{item}}.node] **************************************
ok: [phyn01] => (item=phyn01) => {
"hostvars.phyn01.node": "{{ node01 }}",
"item": "phyn01"
}


Summary:

1. I need to access 'some_data' for each host in a group without redefining the same variables once more for each host individually (lots of code duplication => lots of errors).
2. As you can see from the example, the way I want it to work seems clear. And it works when we are connecting to the host (ansible_ssh_host resolves correctly), and the individual 'var=node' also resolves correctly. Facts are delivered, of course.
3. This approach doesn't work only when I try to get this data for the whole group, and it seems that I'm using some wrong syntax.

So the questions are:

1. How to get 'some_data' for every host?
2. How to define host=hostN in a correct way? I need to use the same constructions as node.some_data for each host and I must define ansible_ssh_host each time because the same hosts may be in different groups (with different data).

Thanks for your attention.

upd: I was writing from memory, so there were a bunch of typos. Now the output is real and the typos are fixed.

#### Haskell all valuations for variables given domain of values

For an assignment we have to create a Haskell function which, given a list of (String, [Integer]) tuples (where the Integer list represents the domain of values for the String), gives all possible combinations of Strings and Integers.

So for example, the input valuations [ ("FirstVar", [1..3]), ("SecondVar", [1,2]) ] should yield:

[ [("FirstVar", 1), ("SecondVar", 1)], [("FirstVar", 1), ("SecondVar", 2)], [("FirstVar", 2), ("SecondVar", 1)], [("FirstVar", 2), ("SecondVar", 2)], [("FirstVar", 3), ("SecondVar", 1)], [("FirstVar", 3), ("SecondVar", 2)] ]


And this should work for n lists. So far, I've only made it work for two lists, but I'm still having trouble with higher-order functions, hence I am confused how I should make this work for n lists.

How I did it for two lists was through a function Valuations:

valuations :: (String, [Integer]) -> (String, [Integer]) -> [[(String, Integer)]]
valuations (a, bs) (c, ds) = pairValuations (makeLists (a, bs)) (makeLists (c, ds))

pairValuations :: [a] -> [a] -> [[a]]
pairValuations xs ys = [ [x, y] | x <- xs, y <- ys ]

makeLists :: (String, [Integer]) -> [(String, Integer)]
makeLists (a, bs) = [ (a, b) | b <- bs ]


Then valuations ("FirstVar", [1..3]) ("SecondVar", [1,2]) does indeed give the desired result, but I'm having trouble expanding this functionality to multiple lists. I hope someone can help me out.
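The two-list version above generalizes to any number of lists by folding one pairing step over the whole input: turn each (variable, domain) pair into its list of bindings, then take a running Cartesian product. In Haskell this shape is `sequence (map makeLists input)` in the list monad; the same idea is sketched below in Python for concreteness (the function name is mine, not from the post):

```python
def valuations(vars_with_domains):
    # Running Cartesian product: start with one empty valuation and
    # extend every partial valuation with each (name, value) binding.
    result = [[]]
    for name, domain in vars_with_domains:
        result = [partial + [(name, v)] for partial in result for v in domain]
    return result
```

For the example input, `valuations([("FirstVar", [1, 2, 3]), ("SecondVar", [1, 2])])` yields the six valuations in the order shown in the question.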

### CompsciOverflow

#### Find the number of positive subsequences in a sequence of numbers with an algorithm that is faster than $O(n^{1.5})$

Find the total number of positive subsequences in a sequence of numbers with an algorithm that is faster than $O(n^{1.5})$. For example, {-2 3} is a positive subsequence of {1 -2 3} since -2+3 = 1 is positive. The following is my code:

import java.util.*;

class PSIv3 {

public static int mergeSort(int[] a, int i, int j){
int count = 0;

if ( i < j ){
int mid = (i+j)/2;
count += mergeSort(a, i, mid);
count += mergeSort(a, mid+1, j);
count += merge(a, i, mid, j);
} else {

if ( a[i] > 0 ){
count++;
return count;
}

}

return count;
}

public static int merge(int[] a, int i, int mid, int j){

int sum = a[mid] + a[mid+1];
int count = 0;
int tempsum;

if (sum > 0)
count++;

for ( int l = mid + 2; l <= j; l++ ){
sum = sum + a[l];
if ( sum > 0 ){
count++;
}
}

sum = a[mid] + a[mid+1];

for ( int k = mid - 1; k >= i; k-- ){
sum = sum + a[k];

if ( sum > 0 )
count++;

tempsum = sum;

for ( int l = mid + 2; l <= j; l++ ){
tempsum = tempsum + a[l];
if ( tempsum > 0 ){
count++;
}
}

}

return count;
}

public static void main(String [] args)
{
Scanner sc = new Scanner(System.in);
int numberOfElements = sc.nextInt();
int[] intArray = new int[numberOfElements];

for ( int i = 0; i < numberOfElements; i++ ){
intArray[i] = sc.nextInt();
}

int count = PSIv3.mergeSort(intArray, 0, numberOfElements-1);

System.out.println(count);

}


}

My algorithm is to modify the mergeSort algorithm with the aim of getting an $O(N \log N)$ algorithm. Instead of sorting, I increment the count when I encounter a positive subsequence. When two portions, say A and B, merge together, there are 3 possibilities to consider: 1) the subsequence is in A, 2) the subsequence is in B, or 3) the subsequence is neither entirely in A nor entirely in B. Since 1) and 2) are already counted, I only have to consider 3). Every subsequence in 3) must contain a[mid] and a[mid+1]; otherwise it would be in 1) or 2). Therefore, I start from this subsequence and expand to the left and to the right, incrementing the count whenever the subsequence is positive.

My algorithm works, but it is not fast enough to pass the test. Hence, I am wondering if anyone has a faster algorithm or can tell me how to optimise the existing algorithm. Thanks.
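One observation (mine, not the poster's): the nested loops in merge are quadratic in the worst case, so the code above is not actually $O(N \log N)$. A standard way to get there uses prefix sums: a subsequence a[i..j] has positive sum exactly when P[j+1] > P[i] for prefix sums P, so the answer equals the number of pairs p < q with P[p] < P[q], which a merge sort can count while it sorts. A Python sketch of that approach:

```python
def count_positive_subarrays(a):
    # Prefix sums: P[0] = 0, P[k] = a[0] + ... + a[k-1].
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)

    def sort_and_count(arr):
        # Returns (sorted copy, number of pairs p < q with arr[p] < arr[q]).
        if len(arr) <= 1:
            return arr, 0
        mid = len(arr) // 2
        left, cl = sort_and_count(arr[:mid])
        right, cr = sort_and_count(arr[mid:])
        merged, count, i, j = [], cl + cr, 0, 0
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                count += len(right) - j  # all remaining right values exceed left[i]
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged, count

    return sort_and_count(prefix)[1]
```

On the question's example, {1, -2, 3} has 4 positive subsequences ({1}, {1,-2,3}, {-2,3}, {3}).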

### StackOverflow

#### Clojure: Higher order functions vs protocols vs multimethods

There are plenty of protocols-vs-multimethods comparisons, but why not use higher-order functions? Let's start with an example: we have some data (a record, for example), and we have methods serialize and deserialize. Say we want to save it into a file, into JSON, and into a database.

Should we create a protocol called SerializationMethod and records called database, json, and file that implement it? It seems like a hack to create records only to use a protocol. The second solution, a multimethod, could take a string parameter naming the serialization output and decide how to do this. But I am not sure that is the right way to go... And the third way is to write a function serialize and pass it the data and a serializing function. But then I cannot give the serializing and deserializing functions the same name (json, for example):

(defn serialize [method data]
(method data))

(defn json[data]
(...))


The question is how can I (or how should I) do this? Is there a more generic way with higher-order functions? Or maybe I don't understand something well? These are my first steps with Clojure, so please be tolerant.
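For the record, the higher-order-function route need not lose the shared name json: one option is a registry keyed by format and direction, with serialize/deserialize as thin lookups. A Python sketch of the idea (names are illustrative; the Clojure analogue would be a plain map of keywords to functions):

```python
import json as jsonlib

# Registry mapping (format, direction) -> function; in Clojure this
# could be a map like {[:json :serialize] json-write, ...}.
CODECS = {
    ("json", "serialize"): lambda data: jsonlib.dumps(data),
    ("json", "deserialize"): lambda text: jsonlib.loads(text),
}

def serialize(fmt, data):
    return CODECS[(fmt, "serialize")](data)

def deserialize(fmt, text):
    return CODECS[(fmt, "deserialize")](text)
```

Adding a new target (file, database) is then one registry entry per direction, with no new record types.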

### CompsciOverflow

#### Acyclic Graph in NL

From the book The Nature of Computation by Moore and Mertens, exercise 8.9:

Consider the problem ACYCLIC GRAPH of telling whether a directed graph is acyclic. Show that the problem is in NL, and then show that the problem is NL-complete.

I am mainly interested in the first part. It's quite easy to show that it's in coNL (just guess a walk, vertex by vertex, from a vertex $v$ that returns to $v$), and then we can use the Immerman-Szelepcsényi Theorem to prove that it's in NL.
But I was not able to construct a Turing machine directly, i.e. a machine that shows that the problem is in NL without the help of Immerman-Szelepcsényi or the construction from their proof. My question thus is:

How can I show directly that ACYCLIC GRAPH is in NL?

#### 2-way Graph Partitioning problem

We have a graph $G=(V,E)$ and we need to divide this graph into two clusters $A$ and $B$. Some pairs of vertices $u$, $v$ should not be in the same cluster, and we define an edge $(u,v) \in E$. The total cost of a solution is the total number of edges with endpoints both in the same cluster. My goal is to find an algorithm that returns the optimal solution that minimizes this total cost. The solution does not have to be balanced in the sense that the clusters should be of the same size.

I was wondering what the general name for this problem is and what a proper algorithm is that can solve it efficiently. A colleague pointed me in the direction of using a (nice) tree decomposition.
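Since the total number of edges is fixed, minimizing same-cluster edges is the same as maximizing edges cut between the clusters, i.e. this is the Max-Cut problem (NP-hard in general, which is why bounded-treewidth approaches via tree decompositions come up). For small instances an exhaustive search is a reasonable baseline; a Python sketch (function name is mine):

```python
from itertools import product

def min_same_cluster_edges(vertices, edges):
    # Try every 2-labelling of the vertices (2^|V| of them) and count
    # edges whose endpoints received the same label.
    vertices = list(vertices)
    best = None
    for labels in product((0, 1), repeat=len(vertices)):
        side = dict(zip(vertices, labels))
        cost = sum(1 for u, v in edges if side[u] == side[v])
        if best is None or cost < best:
            best = cost
    return best
```

A triangle, for example, always keeps one edge inside a cluster (cost 1), while any bipartite graph such as a 4-cycle can reach cost 0.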

### StackOverflow

#### how to cache the results of an sbt TaskKey?

I have an expensive task that I need to reference in my tests

lazy val exampleSources = TaskKey[Seq[File]]("exampleSources", "for use in tests")

exampleSources := (updateClassifiers in Test).value.select(
artifact = artifactFilter(classifier = "sources")
)


(and then I can pass exampleSources.value as a parameter to my forked tests)

However, every time I run a test this task is invoked, and updateClassifiers (expensive) runs again. I'd be happy to cache the value on first call and then use that for the rest of the session.

Without writing the cache myself, is there any way to do this using built-in sbt objects?

UPDATE: this doesn't work. Second evaluation has CACHE=true but the resolution tasks still run.

lazy val infoForTests = TaskKey[Seq[String]]("infoForTests", "for use in tests")

val infoForTestsCache = collection.mutable.Buffer[String]()

infoForTests := {
println("CACHE=" + infoForTestsCache.nonEmpty)
if (infoForTestsCache.isEmpty) {
infoForTestsCache ++= Seq[String](
"-Densime.compile.jars=" + jars((fullClasspath in Compile).value),
"-Densime.test.jars=" + jars((fullClasspath in Test).value),
"-Densime.compile.classDirs=" + classDirs((fullClasspath in Compile).value),
"-Densime.test.classDirs=" + classDirs((fullClasspath in Test).value),
"-Dscala.version=" + scalaVersion.value,
// sorry! this puts a source/javadoc dependency on running our tests
"-Densime.jars.sources=" + (updateClassifiers in Test).value.select(
artifact = artifactFilter(classifier = "sources")
).mkString(",")
)
println("CACHE=" + infoForTestsCache.nonEmpty)
}
infoForTestsCache
}


### QuantOverflow

#### Is there a broad currency index just like there is an equity market index?

I would like to assess the performance of currency traders so I was wondering if there is a broad currency index that can be used as a benchmark to assess the performance of these traders. The index would represent a passive investment strategy in currencies.

Thanks!

### StackOverflow

#### Apache Spark distance between two points using squaredDistance

I have an RDD collection of vectors, where each vector represents a point with x and y coordinates. For example, the input file is as follows:

1.1 1.2
6.1 4.8
0.1 0.1
9.0 9.0
9.1 9.1
0.4 2.1


  def parseVector(line: String): Vector[Double] = {
DenseVector(line.split(' ')).map(_.toDouble)
}

val lines = sc.textFile(inputFile)
val points = lines.map(parseVector).cache()


Also, I have an epsilon:

  val eps = 2.0


For each point I want to find its neighbors that are within distance epsilon. I do:

points.foreach(point =>
// squaredDistance(point, ?) what should I write here?
)


How can I loop over all points and find the neighbors of each? Probably using the map function?
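The all-pairs shape being asked about is `points.cartesian(points)` followed by a filter on the distance (with the caveat that cartesian shuffles O(n²) pairs). The underlying pairwise logic, sketched in plain Python since a Spark cluster isn't assumed here (function name is mine):

```python
import math

def neighbors_within_eps(points, eps):
    # All-pairs comparison; mirrors points.cartesian(points).filter(...)
    # on an RDD. Returns {point index: [indices of its neighbors]}.
    result = {}
    for i, p in enumerate(points):
        result[i] = [j for j, q in enumerate(points)
                     if i != j and math.hypot(p[0] - q[0], p[1] - q[1]) < eps]
    return result
```

On the six sample points with eps = 2.0, the first point's neighbors are (0.1, 0.1) and (0.4, 2.1), and the two points near (9, 9) are neighbors of each other.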

### StackOverflow

#### How to use SBT for interdependent projects in different configurations

I would like to have the following SBT build setup:

object MyBuild extends Build {

lazy val core = Project("core", file("core"))
.dependsOn(testkit % "test")

lazy val testkit = Project("testkit", file("testkit"))
.dependsOn(core % "compile")
}


Here core is the main module, containing the domain objects, and testkit is a module of testing support code (builders, matchers, test drivers, etc.; not the tests themselves) that depends on the domain objects and other classes/utils in core.

For this setup SBT gives a "Cyclic reference" error, although there isn't really a cyclic dependency because of the use of different configurations (core compiles, then testkit compiles depending on core, then core's tests are compiled depending on both).

I found a dirty way to get around this problem by replacing one of the dependsOn calls with an unmanagedClasspath setting, for example:

.settings(unmanagedClasspath in Compile <+= (packageBin in (LocalProject("core"), Compile)))


This feels like a hack, and also makes sbt-idea generate incorrect IntelliJ projects (among other things).

Any idea for a better solution? Does SBT support such a structure?

### /r/compsci

#### Good source for learning Data Structures in C?

With programs included, not just pure theory.

submitted by Neeraj85

### Fred Wilson

#### Video Of The Week: Computer Science Is A Liberal Art

I love this bit from Steve Jobs. It’s a clip from Cringely’s interview, which I blogged about a couple of weeks ago.

This clip is only 53 seconds so everyone can spare that minute and watch it.

### StackOverflow

#### In Elm, when value under a signal has a compound type such as a list, how to efficiently update one element

I'm rather new to Elm, and I'm deeply attracted by the way Elm deals with GUIs. But after some deep thought, I find it hard to efficiently update just one element of a list or tree which is under a Signal and whose size also varies over time.

Specifically, to express a dynamic list, we have to write

Signal [ {-the list's element type-} ]

But if we want to update just one element of the list efficiently, we have to write

Signal [ Signal {-the core data type-} ]

But in Elm the Signal is not a Monad, so how can the two layers of Signals be flattened into one?

Comment:

I don't know in detail how Elm behaves in this situation.
Reprocessing the whole list is just my guess.


#### Sending a response back when a chain of actors are involved in Spray

I'm implementing a REST endpoint in Spray.

Here is the flow, along with the names of the actors responsible for each of the steps below.

1. Invoke the REST API and pass the required parameters (ActorSupervisor)
2. Validate the parameters (ValidateActor)
3. Call an external datastore to get data based on the parameter values (DataStoreActor)
3a. The external datastore API returns a Future
4. Pass the Future in step 3a. to an actor that can process the data (ProcessingActor)
5. Return the processing results back to the client (as Future[HttpResponse])

The most expensive step is #4 (which may take anywhere from 400ms to 5 minutes, depending on the size of the input dataset).

My question is: how can I return an HTTP response back from #5?

### StackOverflow

#### How to pass function as parameters

I have a function that receives a vector and sums all the elements.

(def rec
(fn [numbers acc]
(if (empty? numbers)
acc
(recur (rest numbers) (+ acc (first numbers))))))
(prn (rec [1 2 3] 0))


But instead of calling the function "+" I want to pass the operation as parameter, it means, I want to pass a function as parameter and then call the function.

I tried:

(def rec
(fn [f numbers acc]
(if (empty? numbers)
acc
(recur (rest numbers) (f acc (first numbers))))))
(prn (rec + [4 2 1] 0))


But it does not work. I know there are better ways to sum the numbers in a vector, but I'm starting with functional programming, so it is important to do this kind of exercise.
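The pattern being reached for is an ordinary fold parameterized by its combining function; note that in Clojure, recur must supply a value for every binding of the enclosing fn, so the recur call above also needs to pass f along. The same fold written in Python for comparison (a sketch, not the Clojure fix itself):

```python
def fold(f, numbers, acc):
    # Apply the combining function f to the accumulator and each element.
    for n in numbers:
        acc = f(acc, n)
    return acc
```

Usage: `fold(lambda a, b: a + b, [4, 2, 1], 0)` sums the vector, giving 7; any other two-argument function can be passed in its place.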

#### Scala: Processing Future[Seq[scala.xml.NodeSeq]] in Play 2.3

I have a question concerning processing a Future[Seq[scala.xml.NodeSeq]] object using Play framework 2.3. I am attempting to do this in Scala.

Consider the following JSON is posted to the controller action

{ "studentNo": "12345678", "subjects": [ { "name": "maths" }, { "name": "physics" } ] }


Our action correctly captures the request and validates it (assume case classes and implicit Reads are correctly added). On success, we need to go ahead and create a web service call for each subject and then process the future sequence (data format is XML!!!).

def getResults = Action(BodyParsers.parse.json) { request =>
request.body.validate[Subject].fold(
errors => {
BadRequest(Json.obj("status" -> "KO", "message" -> JsError.toFlatJson(errors)))
},
student => {

val baseURL = "https://www.pathtowebservice.com/script.php"

val params = Map[String, String](
)

def buildURL(subject: String) =
baseURL + "?studentNo=" + student.studentNo + "&" +
"subject=" + subject + "&" +
(for ((key, value) <- params) yield (key + "=" + value)).mkString("&")

val futureResponses: Future[Seq[scala.xml.NodeSeq]] = Future.sequence {
student.subjects.map { subject =>
WS.url(buildURL(subject.name)).get().map {
response => response.xml \ "response"
}
}
}

// Process future XML responses
}
)
}


Considering a failed response will look like

<response>
<result>FAIL</result>
<errors>
<error>You have not supplied a valid student number</error>
</errors>
</response>


and a success will have SUCCESS instead of FAIL with the rest of the data underneath, how would we process the future responses? I honestly cannot find any examples of how to process XML responses, evaluate whether they succeeded or failed, and pass the information back to the user.

#### Scala pattern matching multiple combinator parsers results

Given a List[String] and several parsers, I want to pattern match each String from the List against the parsers. So it'd look like this (warning, pseudo-code):

myStringList.map{
case MyParser.keyword => keyword match {
case KeywordParser.keyword1 => //it's special keyword1
case KeywordParser.keyword2 => //special treatment for keyword2
case NotSpecial => //it's a usual command
}
case MyParser.stringValue => //etc...
}


Why would I want to do so?

I'm parsing a simple script line that contains "strings" and $(keywords). Some of the keywords are special and need to be treated separately. Currently I have only one special keyword, so I'm using chained parseAll and match, but that doesn't feel right. So, how can it be done?

#### Is it required to explicitly stop an Akka actor from supervisor?

In my supervisor actor, a message is sent by the child actor to indicate it has completed. I then stop this actor within the onReceive method:

@Override
public void onReceive(Object msg) {
if (msg == Messages.SUCCESS) {
getContext().stop(getSender());
}
}


But this results in the following message being printed:

[INFO] [10/24/2014 12:12:43.102] [Main-akka.actor.default-dispatcher-6] [akka://Main/user/app/$l] Message [akka.dispatch.sysmsg.Terminate] from Actor[akka://Main/user/app/$l#1444168887] to Actor[akka://Main/user/app/$l#1444168887] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.


Is this expected behaviour? Can these messages be ignored? Is it required to explicitly stop the actor, or will it just stop once it has run to completion and become eligible for garbage collection?

Update: it is required to explicitly stop an actor, as it will not be stopped by the Akka framework:

Akka: Cleanup of dynamically created actors necessary when they have finished?

## What I want:

I have a Clojure program on a remote site; let's call it mccarthy. What I want to do is connect to nrepl-ritz from my laptop, preferably using nrepl-ritz-jack-in. The jack-in works fine for a local program, but doesn't seem to connect to a remote program.

## Attempt 1

C-x C-f on /mccarthy:code/program/project.clj

(require 'nrepl-ritz)

M-x nrepl-ritz-jack-in

## Result

Emacs appears to hang. If I go to the *nrepl-server* buffer, I see this:

Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.flatland.drip.Main.invoke(Main.java:117)
at org.flatland.drip.Main.start(Main.java:88)
at org.flatland.drip.Main.main(Main.java:64)
Caused by: java.lang.AssertionError: Assert failed: project
at leiningen.ritz_nrepl$start_jpda_server.invoke(ritz_nrepl.clj:23)
at leiningen.ritz_nrepl$ritz_nrepl.doInvoke(ritz_nrepl.clj:95)


(and tons of other lines, too...)

I am using drip on my laptop but not on mccarthy, so clearly nrepl-ritz-jack-in is not detecting that it's a remote file. Regular old nrepl-jack-in works as expected in this case, however.

## Attempt 2

I also tried starting an nrepl-ritz using lein on mccarthy:

mattox@mccarthy$ lein ritz-nrepl
nREPL server started on port 42874


From my laptop I forward a port so local 42874 connects to 42874 on mccarthy:

ssh -L 42874:localhost:42874 -N mccarthy


Then, from my local Emacs:

(require 'nrepl-ritz)


M-x nrepl with Host: 127.0.0.1 and Port: 42874 gives me a connection:

; nREPL 0.1.7-preview
user>


So to test it out, I run M-x nrepl-ritz-threads, and it gives me a nice table of threads. Then:

M-x nrepl-ritz-break-on-exception
user> (/ 1 0)


## Result

This hangs, but sometimes shows a hidden debugging buffer with some restarts available. If I tell it to pass the exception back to the program, it never gives control back to the REPL.

I've done plenty of searches but have not been able to get anything more specific than "make sure lein is on your path". (And I did do that, on both machines...)

Thanks for reading a (I hope not too) long question!

### /r/clojure

#### How do you organize your Core.async code?

What is the most correct way to lay out your code with core.async? Should long-running (while true) blocks live inside of a function, or is it adequate for them to stand alone? All of the examples use naked (let) blocks instead of functions, so I'm not exactly sure. Thanks!

(defn f [in]
(go (while true (println (<! in)))))


vs a go block outside of any defn:

(def in (chan))
(go (while true (println (<! in))))


submitted by lunkdjedi

### StackOverflow

#### How to transform Scala nested map operation to Scala Spark operation?
The code below calculates the Euclidean distance between two Lists in a dataset:

val user1 = List("a", "1", "3", "2", "6", "9")
//> user1 : List[String] = List(a, 1, 3, 2, 6, 9)
val user2 = List("b", "1", "2", "2", "5", "9")
//> user2 : List[String] = List(b, 1, 2, 2, 5, 9)
val all = List(user1, user2)
//> all : List[List[String]] = List(List(a, 1, 3, 2, 6, 9), List(b, 1, 2, 2, 5, 9))

def euclDistance(userA: List[String], userB: List[String]) = {
println("comparing " + userA(0) + " and " + userB(0))
val zipped = userA.zip(userB)
val lastElements = zipped match {
case (h :: t) => t
}
val subElements = lastElements.map(m => ((m._1.toDouble - m._2.toDouble) * (m._1.toDouble - m._2.toDouble)))
val summed = subElements.sum
val sqRoot = Math.sqrt(summed)
sqRoot
}
//> euclDistance: (userA: List[String], userB: List[String])Double

all.map(m => (all.map(m2 => euclDistance(m, m2))))
//> comparing a and a
//| comparing a and b
//| comparing b and a
//| comparing b and b
//| res0: List[List[Double]] = List(List(0.0, 1.4142135623730951), List(1.4142135623730951, 0.0))


But how can this be translated into a parallel Spark Scala operation?
When I print the contents of distAll:

scala> distAll.foreach(p => p.foreach(println))
14/10/24 23:09:42 INFO SparkContext: Starting job: foreach at <console>:21
14/10/24 23:09:42 INFO DAGScheduler: Got job 2 (foreach at <console>:21) with 4 output partitions (allowLocal=false)
14/10/24 23:09:42 INFO DAGScheduler: Final stage: Stage 2(foreach at <console>:21)
14/10/24 23:09:42 INFO DAGScheduler: Parents of final stage: List()
14/10/24 23:09:42 INFO DAGScheduler: Missing parents: List()
14/10/24 23:09:42 INFO DAGScheduler: Submitting Stage 2 (ParallelCollectionRDD[1] at parallelize at <console>:18), which has no missing parents
14/10/24 23:09:42 INFO MemoryStore: ensureFreeSpace(1152) called with curMem=1152, maxMem=278019440
14/10/24 23:09:42 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 1152.0 B, free 265.1 MB)
14/10/24 23:09:42 INFO DAGScheduler: Submitting 4 missing tasks from Stage 2 (ParallelCollectionRDD[1] at parallelize at <console>:18)
14/10/24 23:09:42 INFO TaskSchedulerImpl: Adding task set 2.0 with 4 tasks
14/10/24 23:09:42 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 8, localhost, PROCESS_LOCAL, 1169 bytes)
14/10/24 23:09:42 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 9, localhost, PROCESS_LOCAL, 1419 bytes)
14/10/24 23:09:42 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 10, localhost, PROCESS_LOCAL, 1169 bytes)
14/10/24 23:09:42 INFO TaskSetManager: Starting task 3.0 in stage 2.0 (TID 11, localhost, PROCESS_LOCAL, 1420 bytes)
14/10/24 23:09:42 INFO Executor: Running task 0.0 in stage 2.0 (TID 8)
14/10/24 23:09:42 INFO Executor: Running task 1.0 in stage 2.0 (TID 9)
14/10/24 23:09:42 INFO Executor: Running task 3.0 in stage 2.0 (TID 11)
a
14/10/24 23:09:42 INFO Executor: Running task 2.0 in stage 2.0 (TID 10)
14/10/24 23:09:42 INFO Executor: Finished task 2.0 in stage 2.0 (TID 10). 585 bytes result sent to driver
1
14/10/24 23:09:42 INFO TaskSetManager: Finished task 2.0 in stage 2.0 (TID 10) in 16 ms on localhost (1/4)
3
14/10/24 23:09:42 INFO Executor: Finished task 0.0 in stage 2.0 (TID 8). 585 bytes result sent to driver
2
14/10/24 23:09:42 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 8) in 16 ms on localhost (2/4)
6
9
14/10/24 23:09:42 INFO Executor: Finished task 1.0 in stage 2.0 (TID 9). 585 bytes result sent to driver
b
14/10/24 23:09:42 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 9) in 16 ms on localhost (3/4)
1
2
2
5
9
14/10/24 23:09:42 INFO Executor: Finished task 3.0 in stage 2.0 (TID 11). 585 bytes result sent to driver
14/10/24 23:09:42 INFO TaskSetManager: Finished task 3.0 in stage 2.0 (TID 11) in 31 ms on localhost (4/4)
14/10/24 23:09:42 INFO DAGScheduler: Stage 2 (foreach at <console>:21) finished in 0.031 s
14/10/24 23:09:42 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
14/10/24 23:09:42 INFO SparkContext: Job finished: foreach at <console>:21, took 0.037641021 s


The distances are not populated?

Update: To get Eugene Zhulenev's answer below to work for me, I needed to make the following changes: extend UserObject with java.io.Serializable, and also rename User to UserObject.
Here is the updated code:

val user1 = List("a", "1", "3", "2", "6", "9")
val user2 = List("b", "1", "2", "2", "5", "9")

case class User(name: String, features: Vector[Double])

object UserObject extends java.io.Serializable {
def fromList(list: List[String]): User = list match {
case h :: tail => User(h, tail.map(_.toDouble).toVector)
}
}

val all = List(UserObject.fromList(user1), UserObject.fromList(user2))

val users = sc.parallelize(all.combinations(2).toSeq.map {
case l :: r :: Nil => (l, r)
})

def euclDistance(userA: User, userB: User) = {
println(s"comparing ${userA.name} and ${userB.name}")
val subElements = (userA.features zip userB.features) map {
m => (m._1 - m._2) * (m._1 - m._2)
}
val summed = subElements.sum
val sqRoot = Math.sqrt(summed)
println("value is " + sqRoot)
sqRoot
}

users.foreach(t => euclDistance(t._1, t._2))


Update 2: I've tried the code in maasg's answer but receive an error:

scala> val userDistanceRdd = usersRdd.map { case (user1, user2) => {
| val data = sc.broadcast.value
| val distance = euclidDistance(data(user1), data(user2))
| ((user1, user2), distance)
| }
| }
<console>:27: error: missing arguments for method broadcast in class SparkContext;
follow this method with `_' if you want to treat it as a partially applied function
val data = sc.broadcast.value


Here is the entire code with my amendments:

type UserId = String
type UserData = Array[Double]

val users: List[UserId] = List("a", "b")
val data: Map[UserId, UserData] = Map(
("a", Array(3.0, 4.0)),
("b", Array(3.0, 4.0))
)

def combinations[T](l: List[T]): List[(T, T)] = l match {
case Nil => Nil
case h :: Nil => Nil
case h :: t => t.map(x => (h, x)) ++ combinations(t)
}

val broadcastData = sc.broadcast(data)
val usersRdd = sc.parallelize(combinations(users))
val euclidDistance: (UserData, UserData) => Double = (x, y) =>
math.sqrt((x zip y).map { case (a, b) => math.pow(a - b, 2) }.sum)
val userDistanceRdd = usersRdd.map { case (user1, user2) => {
val data = sc.broadcast.value
val distance = euclidDistance(data(user1), data(user2))
((user1, user2), distance)
}
}


#### Is there a standalone implementation of std::function?

I'm working on an embedded system, so code size is an issue. Using the standard library ups my binary size by about 60k, from 40k to 100k. I'd like to use std::function, but I can't justify it for 60k. Is there a standalone implementation that I can use, or something similar? I'm using it to implicitly cast lambdas in member functions with bound variables in C++11.

#### insert-sort with reduce clojure

I have the function

(defn goneSeq [inseq uptil]
(loop [counter 0 newSeq [] orginSeq inseq]
(if (== counter uptil)
newSeq
(recur (inc counter) (conj newSeq (first orginSeq)) (rest orginSeq)))))

(defn insert [sorted-seq n]
(loop [currentSeq sorted-seq counter 0]
(cond (empty? currentSeq) (concat sorted-seq (vector n))
(<= n (first currentSeq)) (concat (goneSeq sorted-seq counter) (vector n) currentSeq)
:else (recur (rest currentSeq) (inc counter)))))


that takes in a sorted sequence and inserts the number n at its appropriate position. For example, (insert [1 3 4] 2) returns [1 2 3 4]. Now I want to use this function with reduce to sort a given sequence, so something like:

(reduce (insert seq n) givenSeq)


What is the correct way to achieve this?

#### Clojure: How to find out the arity of function at runtime?

Given a function object or name, how can I determine its arity? Something like (arity func-name). I hope there is a way, since arity is pretty central in Clojure.

#### Create a directory using Ansible

How to create a directory www at /srv on a Debian-based system using an Ansible playbook?

#### ZeroMQ doesn't auto-reconnect

I've just downloaded and installed zeromq-4.0.5 on an Ubuntu Precise (12.04) system. I've compiled the hello-world client (REQ, connect, 127.0.0.1) and server (REP, bind) written in C.

1. I start the server.
2. I start the client.
3. Each second the client sends a message to the server, and receives a response.
4. I press Ctrl-C to stop the server.
5. The client tries to send its next outgoing message and gets stuck in a never-returning epoll system call (as shown by strace).
6. I restart the server.
7. The zmq_recv call in the client is still stuck, even when the new server has been running for a minute.

The only way to make progress for the client is to kill it (with Ctrl-C) and restart it.

Q1: Is this the expected behavior? I'd expect that in a few seconds the client should figure out that the server is running again, and it would auto-reconnect.

Q2: What should I change in the example code to fix this?

Q3: Am I using the wrong version of the software, or is something broken on my system? I've disabled the firewall; sudo iptables -S prints -P INPUT ACCEPT; -P FORWARD ACCEPT; -P OUTPUT ACCEPT.

In the strace -f ./hwclient output I can see that the client tries connect() 10 times a second (the default value of ZMQ_RECONNECT_IVL) after the server goes down. In the strace -f ./hwserver output I can see that the restarted server accept()s the connection. However, communication gets stuck after that, and the server never receives the actual request from the client (but it notices when I kill the client; also, the server receives requests from other clients started after the server restart). Using ipc:// instead of tcp:// causes the same behavior.

The auto-reconnect happens successfully in zmq_send if the server has been killed before the client does the next zmq_send. However, when the server gets killed while the client is inside zmq_recv, then the zmq_recv blocks indefinitely, and the client can't seem to recover from that.

I've found this article, which recommends using timeouts.
However, I think that timeouts can't be the right solution: the TCP disconnect notification is already available in the client process, and the client is already acting on it. It just doesn't make zmq_recv resend the request to the new server, or at least return early indicating an error.

### DataTau

#### Comparing elastic net to stochastic gradient descent for GLMs

### Planet Clojure

#### Pre-Conj Interview: Paul deGrandis

Paul deGrandis interview about data-driven systems.

### StackOverflow

#### Play framework + Scala: Inject dependency using Action Composition

I'm setting up a Play! app for our API. This API encapsulates different services. I want to inject these services inside an action, but only the ones required for that particular endpoint. Something like:

object Application extends Controller {
  def index = (UsersAction andThen OrdersAction) {
    // boom: UsersService and OrdersService are available here
    for {
      users <- usersService.list
      orders <- ordersService.list
    } yield "whatever"
  }
}

I've been playing with this idea, and using ActionTransformers I'm able to transform the incoming Request into a request that carries a given service, but I don't see how I can make that generic enough to compose these actions in an arbitrary order without creating ActionTransformers for all the possible combinations of WrapperRequests. Maybe action composition is not the best way to achieve this. I'm all ears. Thank you.

UPDATE: To clarify, the code above is pseudocode: the ideal scenario, in which usersService and ordersService are made available to that scope (implicits? I don't know). If that's not possible, then whatever adds the least amount of noise on top of that sample would work. Thanks.

### /r/scala

#### def multiply(m: Int)(n: Int): Int = m * n

I'm going through Scala School basics and I noticed this:

def multiply(m: Int)(n: Int): Int = m * n

What confuses me is (m: Int)(n: Int); it seems more common for it to be (m: Int, n: Int). What exactly is the meaning of (m: Int)(n: Int), and how does it differ from (m: Int, n: Int)?

submitted by metaperl [link] [14 comments]

### StackOverflow

#### Codility TapeEquilibrium Scala

I coded up a solution to the TapeEquilibrium problem on Codility using Scala. I've tried numerous test inputs of varying loads, and when I run them in the Codility development environment and in Eclipse, I get correct answers. However, when I submit the solution it fails almost every test with the wrong answer. I can't get hold of the exact inputs, but I have generated similar-sized inputs with random numbers and those inputs always work. I've looked over my logic for a while but can't figure out what I'm doing wrong. Can someone help me? The test can be found here. Here is my code:

import org.scalacheck.Gen
import org.scalacheck._

object Problem1 extends App {
  def solution(A: Array[Int]): Int = {
    val sumRight = A.foldLeft(0)(_ + _)

    def absDiffer(a: Int, b: Int) = if (a < b) b - a else a - b

    def minimizer(ar: List[Int], prevDiff: Int, sumL: Int, sumR: Int): Int = {
      val diff = absDiffer(sumL, sumR)
      if (diff <= prevDiff)
        minimizer(ar.tail, diff, ar.head + sumL, sumR - ar.head)
      else prevDiff
    }

    minimizer(A.toList, absDiffer(A.head, sumRight - A.head), A.head, sumRight - A.head)
  }

  def randomInput(length: Int) =
    Gen.listOfN(length, Gen.oneOf(Range(-1000, 1000))).sample.get
  def randomPosInput(length: Int) =
    Gen.listOfN(length, Gen.oneOf(Range(1, 100))).sample.get
  def randomNegInput(length: Int) =
    Gen.listOfN(length, Gen.oneOf(Range(-1000, -1))).sample.get

  val ar = randomPosInput(100000)
  val inputString = ar.mkString("[", ", ", "]")
  val clipboard = java.awt.Toolkit.getDefaultToolkit().getSystemClipboard()
  val sel = new java.awt.datatransfer.StringSelection(inputString)
  clipboard.setContents(sel, sel)
  println(inputString)
  println(solution(ar.toArray))
}

### CompsciOverflow

#### Efficient algorithm for regular expression matching

To find all the occurrences of text matching a regular expression (R), e.g. 1*01*, in a large chunk of text (T), what would the most efficient algorithm be? Had R been a fixed pattern, KMP, Boyer-Moore, or Rabin-Karp could be the solution.

### StackOverflow

#### Will ZMQ work under concurrent connection over 10K?

I am going to launch an app using ZMQ.SUB as client and ZMQ.PUB as server over TCP. In case there are over 10K concurrent users, will ZMQ still work? I cannot simulate this case myself: I can send over 10K messages per second to the ZMQ server, but I cannot simulate 10K connections to it. Or is ZMQ totally "session-less" over TCP, so that I don't need to worry about this? Thanks all.

#### Cannot add ReactiveMongo to Play-Framework

I am having trouble integrating ReactiveMongo with the Play framework. My build.sbt:

libraryDependencies ++= Seq(
  "org.reactivemongo" %% "play2-reactivemongo" % "0.9"
)

When I try to run the server with the play run command I get the following error:

[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: UNRESOLVED DEPENDENCIES ::
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
[warn] :: org.reactivemongo#play2-reactivemongo_2.9.2;0.9: not found
[warn] ::::::::::::::::::::::::::::::::::::::::::::::
sbt.ResolveException: unresolved dependency: org.reactivemongo#play2-reactivemongo_2.9.2;0.9: not found

What goes wrong is clear: it is looking for the Scala 2.9.2 version of the library. I have no idea why SBT is looking for 2.9; I have 2.10 installed. I have tried on several machines.

$ scalac -version
Scala compiler version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL

and
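Picking up the /r/scala question above about (m: Int)(n: Int): both forms compute the same product, but the first declares two parameter lists (a curried method), which makes partial application direct. A quick sketch (the names here are mine, not from the post):

```scala
// (m: Int)(n: Int) declares two parameter lists (a curried method);
// (m: Int, n: Int) declares one parameter list with two parameters.
def multiplyCurried(m: Int)(n: Int): Int = m * n
def multiplyPlain(m: Int, n: Int): Int = m * n

// Both compute the same product:
assert(multiplyCurried(3)(4) == 12)
assert(multiplyPlain(3, 4) == 12)

// The curried form makes partial application direct: fix m, get a function of n.
val double: Int => Int = multiplyCurried(2)
assert(double(21) == 42)
```

Curried parameter lists also help type inference when a function argument follows a value argument, which is why many standard-library methods (e.g. foldLeft) are written this way.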

### CompsciOverflow

#### Origin of tFAW (Four Activation Window) in DRAM timing constraint

In DRAM timing constraints, tFAW is the length of a rolling window within which at most four row activations may be issued to the same rank. This constraint comes mainly from the power budget of each rank.

However, I am curious why 4 is the magic number. Would it be wrong to use an Eight-Activation Window with double the value, or a Two-Activation Window with half the value?

### DragonFly BSD Digest

#### BSDNow 060: Don’t Buy a Router

BSDNow episode 060 bypasses the pun and just commands you to obey.  At least, I don’t know the reference if there is one.  Anyway, there’s an interview of Olivier Cochard-Labbé of the BSD Router Project, along with the usual array of news.

### StackOverflow

#### May a while loop be used with yield in scala

Here is the standard format for a for/yield in Scala. Notice that it expects a collection, whose elements drive the iteration:

for (blah <- blahs) yield someThingDependentOnBlah


I have a situation where an indeterminate number of iterations will occur in a loop. The inner loop logic determines how many will be executed.

while (condition) { some logic that affects the triggering condition } yield blah


Each iteration will generate one element of a sequence - just like a yield is programmed to do. What is a recommended way to do this?
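Scala has no while ... yield form, but one common stand-in is an Iterator whose takeWhile carries the loop condition, yielding one element per iteration. A hedged sketch, where the Collatz-style step is just placeholder logic of my own:

```scala
// Build a sequence from an indeterminate number of iterations:
// Iterator.iterate plays the loop body, takeWhile plays "while (condition)".
def collatzSteps(start: Int): List[Int] =
  Iterator.iterate(start)(n => if (n % 2 == 0) n / 2 else 3 * n + 1)
    .takeWhile(_ != 1) // the "while (condition)" part
    .toList            // collect one "yielded" element per iteration

assert(collatzSteps(6) == List(6, 3, 10, 5, 16, 8, 4, 2))
```

When the loop state is more complex than a single value, Iterator.unfold (Scala 2.13+) or a small tail-recursive helper accumulating a List serve the same purpose.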

### StackOverflow

#### Can Python's functional programming be used to completely avoid interpreter and method overhead?

I want to achieve near-C speeds for working with sqlite and regex pattern searching. I'm aware of other libraries and FTS4 that will be faster or alternative solutions, but that's not what I'm asking.

I've discovered that as long as I don't use lambda or defined methods, or python code at all, certain primitives and C level functions exposed by CPython can be injected directly as sqlite custom functions, and when run, a boost of 10x is achieved, even if there are no operations done except return a constant. However, I'm not ready to dive into creating extensions, and I am trying to avoid having to use a tool like Cython to intermix C and python together.

I've devised the following test code that exposes these performance differences, and makes use of some speedups provided by a third party library, cytoolz (compose method) to achieve some functional-style logic while avoiding lambdas.

import sqlite3
import operator
from cytoolz import functoolz
from functools import partial
from itertools import ifilter,chain
import datetime
from timeit import repeat
import re,os
from contextlib import closing
db_path='testdb.sqlite'
existed=os.path.exists(db_path)
re_pat=re.compile(r'l[0-3]+')
re_pat_match=re.compile(r'val[0-3]+')
with closing(sqlite3.connect(db_path)) as co, co as co:
    if not existed:
        print "creating test data"
        co.execute('create table test_table (testval TEXT)')
        co.executemany('insert into test_table values (?)',(('val%s'%v,) for v in xrange(100000)))

    def count(after_from=''):
        print co.execute('select count(*) from test_table %s'%(after_from,)).fetchone()[0]

    def python_return_true(v):
        return True

    co.create_function('python_return_true',1,python_return_true)
    co.create_function('python_lambda_true',1,lambda x: True)
    co.create_function('custom_lower',1,operator.methodcaller('lower'))
    co.create_function('custom_composed_match',1,functoolz.compose(partial(operator.is_not,None),re_pat_match.match))
    data=[None,type('o',(),{"group":partial(operator.truth,0)})] # create a working list with a fallback object
    co.create_function('custom_composed_search_text',1,functoolz.compose(
        operator.methodcaller('group'), # call group() on the final element (read these comments in reverse!)
        next, # convert back to single element. list will either be length 1 or 2
        partial(ifilter,None), # filter out failed search (is there a way to emulate a conditional method call via some other method??)
        partial(chain,data), # iterate list (will raise exception if it reaches result of setitem which is None, but it never will)
        partial(data.__setitem__,0), # set search result to list
        re_pat.search # first do the search
    ))
    co.create_function('custom_composed_search_bool',1,functoolz.compose(partial(operator.is_not,None),re_pat.search))
    _search=re_pat.search # prevent an extra lookup in lambda
    co.create_function('python_lambda_search_bool',1,lambda _in:1 if _search(_in) else None)
    co.create_function('custom_composed_subn_alternative',1,functoolz.compose(operator.itemgetter(1),partial(re_pat.subn,'',count=1)))
    for to_call,what in (
        (partial(count,after_from='where 1'),'pure select'),
        (partial(count,after_from='where testval'),'select with simple compare'),
        (partial(count,after_from='where python_return_true(testval)'),'select with python def func'),
        (partial(count,after_from='where python_lambda_true(testval)'),'select with python lambda'),
        (partial(count,after_from='where custom_lower(testval)'),'select with python lower'),
        (partial(count,after_from='where custom_composed_match(testval)'),'select with python regex matches'),
        (partial(count,after_from='where custom_composed_search_text(testval)'),'select with python regex search return text (chain)'),
        (partial(count,after_from='where custom_composed_search_bool(testval)'),'select with python regex search bool (chain)'),
        (partial(count,after_from='where python_lambda_search_bool(testval)'),'select with python regex search bool (lambda function)'),
        (partial(count,after_from='where custom_composed_subn_alternative(testval)'),'select with python regex search (subn)'),
    ):
        print '%s:%s'%(what,datetime.timedelta(0,min(repeat(to_call,number=1))))


Output with Python 2.7.8 32-bit (OS: Windows 8.1 64-bit Home), print statements omitted:

pure select:0:00:00.003457
select with simple compare:0:00:00.010253
select with python def func:0:00:00.530252
select with python lambda:0:00:00.530153
select with python lower:0:00:00.051039
select with python regex matches:0:00:00.066959
select with python regex search return text (chain):0:00:00.134115
select with python regex search bool (chain):0:00:00.067687
select with python regex search bool (lambda function):0:00:00.576427
select with python regex search (subn):0:00:00.136042


I'm probably going to go with some variation of the "select with python regex search bool (chain)" approach above. So my question is in two parts.

1. Sqlite3 will fail if a create_function() call registers a function that returns anything but a primitive it understands, so the MatchObject that search() returns needs to be converted, hence the chained "is not null" method. For the text-returning search function this turns ugly (not very straightforward), as you can see in the source. Is there an easier alternative to the element-to-iterator conversion strategy I used to make a non-Python function show a MatchObject's group only when a regex search succeeds, for use with sqlite3?

2. I am continuously battling with the speed of Python: whether to use database functions over Python functions, or lists instead of dicts or objects, wasting lines of code copying variable names into the local namespace, using generators instead of additional method calls, or inlining loops and functions instead of benefiting from the abstractions Python can provide. What are some other functions/libraries I should consider that would give huge efficiency payoffs (I'm talking at least 10x) while still using Python for scaffolding? I'm aware of tools that speed up the Python code itself (PyPy, Cython), but they seem riskier to use, or they still suffer from how Python's language constructs restrict optimization because the code is assumed to always be 'interpreted'. Perhaps there are a few ctypes-exposed methods and strategies that could pay off in the realm of fast text processing? I am aware of the libraries focused on scientific, statistical and mathematical speedups, but I'm not particularly interested in that realm.

### /r/compsci

#### Noob Question - How do you package a program for people to use?

To try and explain what I'm asking, I'll give an example. I'm writing a program in C# that does something, then someone wants to try it out. Rather than send them the folder with my source code, debug folders, etc. along with my .exe, how would I package everything so that someone could download it and then just click on the program icon and run it?

I have some experience with C++, C#, XAML, SQL, Java, PLC programming, and assembly, but I still haven't learned anything about packaging programs so far... >.< Which sort of makes me feel like a moron. Hopefully this was the correct place to post this question; thanks in advance.

### CompsciOverflow

#### Proving correctness of an AVL-Tree colouring algorithm

I came up with the following recursive algorithm to colour the nodes of an AVL tree so that the resulting tree is red-black. The logic is that the algorithm colours the root black and then does the same, recursively, for the left and right subtrees. Since it is an AVL tree, if the root has height h, one root-to-leaf path has length h and another has length h-1. If h is odd, the two subtrees of the root have heights h-1 (even) and h-2 (odd), with the even-height subtree having an extra black node, hence its root is to be coloured red.

(pseudocode)
colour_tree(T):
    r <- root(T)
    if r = null:
        return
    colour(r) = "black"

    colour_tree(right(r))
    colour_tree(left(r))

    if isodd(height(r)):
        if right(r) != null and iseven(height(right(r))):
            colour(right(r)) = "red"
        if left(r) != null and iseven(height(left(r))):
            colour(left(r)) = "red"


I am having trouble starting a proof of correctness for this algorithm. I know that any tree of height h has at least $\lceil (h+1)/2 \rceil$ black nodes, and intuitively it seems like I'd have to perform induction on the height, but I am not sure where to start. Thanks.
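As a way to build intuition before attempting the induction (this is my own scaffolding, not part of the question), the pseudocode can be transcribed and its key invariant, that every root-to-leaf path sees the same number of black nodes, checked mechanically on a small AVL tree. The Tree types below are hypothetical:

```scala
sealed trait Tree
case object Leaf extends Tree
final class Node(var colour: String, val left: Tree, val right: Tree) extends Tree

def height(t: Tree): Int = t match {
  case Leaf    => -1
  case n: Node => 1 + math.max(height(n.left), height(n.right))
}

// Direct transcription of the pseudocode: colour everything black,
// then recolour even-height children of odd-height nodes red.
def colourTree(t: Tree): Unit = t match {
  case Leaf => ()
  case n: Node =>
    n.colour = "black"
    colourTree(n.right)
    colourTree(n.left)
    if (height(n) % 2 == 1)
      Seq(n.left, n.right).foreach {
        case c: Node if height(c) % 2 == 0 => c.colour = "red"
        case _                             => ()
      }
}

// The set of per-path black-node counts; a singleton set means all paths agree.
def blackCounts(t: Tree, acc: Int): Set[Int] = t match {
  case Leaf => Set(acc)
  case n: Node =>
    val a = if (n.colour == "black") acc + 1 else acc
    blackCounts(n.left, a) ++ blackCounts(n.right, a)
}

// A height-2 AVL tree (subtree heights differ by at most 1 at every node).
val avl = new Node("?", new Node("?", new Node("?", Leaf, Leaf), Leaf), new Node("?", Leaf, Leaf))
colourTree(avl)
assert(blackCounts(avl, 0).size == 1) // all root-to-leaf paths agree
```

A check like this is evidence, not a proof, but running it on AVL trees of odd and even heights suggests the case split the induction hypothesis needs.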

### /r/emacs

#### Active cursor not changing appearance in OSX Yosemite

Normally, with more than a single window, I have different cursor appearances depending on which cursor has focus. The active cursor is a solid rectangular block and the inactive cursor is a rectangle without fill. I sort of assume this is the default behaviour, but might be wrong.

Now when I'm fullscreen in the new OSX, moving back to emacs from other spaces causes the cursor to be active (I can move and edit with it normally), but it doesn't change its appearance anymore, it stays as an empty rectangle. Only after clicking on Emacs with the mouse does the cursor fill properly. This is kind of ruining my usage of emacs, because with two windows open I can't visually tell which cursor I'm editing with (until I click somewhere with the mouse, or do some garbage input).

Anyone else getting this? It appears to be Yosemite, not 24.3/24.4.

submitted by mrmagooey

### /r/compsci

#### Computer Science in two different departments?

The university I'm considering has 2 different campuses that offer Computer Science but one lists it in the Math department and the other in Natural Sciences. Should I expect any notable differences?

submitted by Zwolfer

### QuantOverflow

#### What would be considered a good/competitive throughput for a FIX engine?

I am writing my own FIX engine and I am in the process of running some benchmarks. I am not sure whether my results are good or bad. Can someone with experience in the area provide me with some benchmark throughput numbers? How many FIX messages should my client be able to process per second?

### Fefe

#### Non-news: cop kicks a suspect. News: The ...

Non-news: a cop kicks a suspect.

News: the suspect was also a cop. A little mix-up, these things happen!1!!

#### Bug of the day: running strings on a binary can ...

Bug of the day: running strings on a binary can crash strings. At the moment it doesn't look like more than that, but one shouldn't praise the day before the evening.

I mention this and make it the bug of the day because running strings over a file is one of the first things you do in computer forensics. And, as it turns out, GNU strings uses libbfd from binutils and parses ELF binaries. If a binary has broken headers, strings simply crashes. For forensics applications that is, of course, absolutely indefensible. If only because you now have to ask whether more than a crash can be squeezed out of that rancid libbfd parsing code. Well? Who wants to audit it? I once looked at this ELF stuff a bit, for dietlibc, and was simply relieved when my code finally worked. ELF is a very scary standard.

### StackOverflow

#### How can I alias a covariant generic type parameter

The following code does not compile (in Scala 2.11):

case class CovariantClass[+R](value: R) {
type T = R
def get: R = value
}

object Main {
def main(args: Array[String]): Unit ={
println(CovariantClass[String]("hello").get)
}
}


The error message is:

Error:(4, 8) covariant type R occurs in invariant position in type R of type T
type T = R
^


Why can't I alias a covariant type parameter? If I remove the line type T = R, the code compiles and prints hello, so the alias seems to be the problem. Unfortunately, this means that I cannot create an alias for more complex types, e.g., type T = List[R] does not compile either, although List is covariant.
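Roughly speaking (my explanation, not from the question), the alias is rejected because `type T = R` makes R usable in any position, including contravariant ones, through the type member, which would let a subclass defeat the variance check. An abstract member with an upper bound sidesteps this, since an upper bound is a covariant position:

```scala
// Hedged workaround sketch: replace the alias with a bounded abstract member.
case class CovariantClass[+R](value: R) {
  type T <: R        // compiles: an upper bound is a covariant position
  def get: R = value
}

assert(CovariantClass("hello").get == "hello")
```

If a concrete alias is genuinely required, the other common escape hatch is annotating it with scala.annotation.unchecked.uncheckedVariance, at the cost of losing the soundness guarantee the error was protecting.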

### CompsciOverflow

#### When is counting coins NP-complete?

I'm having a bit of an issue with this question: deciding which of these situations can be solved with dynamic programming and which are NP-complete.

Each scenario (except the last) asks how much goes to persons A and B if they split the amount down the middle:

A. There are N coins of an arbitrary number of different denominations, but each denomination is a power of 2 (I.E. 2 cents, 4 cents, etc.)

B. There are N bills of arbitrary denominations.

C. Same as B, but now persons A and B want to split the tips into two unequal parts, provided the difference between them is less than 10 (i.e. |A| - |B| < 10).
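For orientation (my sketch, not part of the question): case B is the classic PARTITION problem, NP-complete in general but solvable by a pseudo-polynomial dynamic program when the total value is bounded, which is the DP the question is hinting at. A minimal subset-sum DP deciding whether an even split exists:

```scala
// Decides whether the bills/coins can be split into two equal halves.
// Pseudo-polynomial: O(n * total/2) time, the standard PARTITION DP.
def canSplitEvenly(values: Seq[Int]): Boolean = {
  val total = values.sum
  if (total % 2 != 0) false
  else {
    val target = total / 2
    val reachable = Array.fill(target + 1)(false)
    reachable(0) = true // the empty subset reaches sum 0
    // Iterate sums downward so each value is used at most once.
    for (v <- values; s <- target to v by -1)
      if (reachable(s - v)) reachable(s) = true
    reachable(target)
  }
}

assert(canSplitEvenly(Seq(2, 4, 2)))  // 2 + 2 balances 4
assert(!canSplitEvenly(Seq(2, 4, 4))) // total 10, but no subset sums to 5
```

Case A's power-of-2 restriction is what makes a greedy/counting argument possible there, and case C only relaxes the target from total/2 to a small window of sums, which the same reachability table answers.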

### Planet FreeBSD

#### PC-BSD 10.1-RC1 Released

The PC-BSD team is pleased to announce the availability of RC1 images for the upcoming PC-BSD 10.1 release.

PC-BSD Notable Changes

* KDE 4.14.2
* GNOME 3.12.2
* Cinnamon 2.2.16
* Chromium 38.0.2125.104_1
* Firefox 33.0
* NVIDIA Driver 340.24
* Lumina desktop 0.7.0-beta
* Pkg 1.3.8_3
* New AppCafe HTML5 web/remote interface, for both desktop / server usage
* New CD-sized text-installer ISO files for TrueOS / server deployments
* New Centos 6.5 Linux emulation base
* New HostAP mode for Wifi GUI utilities
* Misc bug fixes and other stability improvements

TrueOS

Along with our traditional PC-BSD DVD ISO image, we have also created a CD-sized ISO image of TrueOS, our server edition.

This is a text-based installer which includes FreeBSD 10.0-Release under the hood. It includes the following features:

* ZFS on Root installation
* Boot-Environment support
* Command-Line versions of PC-BSD utilities, such as Warden, Life-Preserver and more.
* Support for full-disk (GELI) encryption without an unencrypted /boot partition

Updating

A testing update is available for 10.0.3 users to upgrade to 10.1-RC1. To apply this update, edit (as root):

/usr/local/share/pcbsd/pc-updatemanager/conf/sysupdate.conf

then run:

% sudo pc-updatemanager check

This should show you a new “Update system to 10.1-RELEASE” patch available. To install run the following:

% sudo pc-updatemanager install 10.1-update-10152014–10

NOTICE

As with any major system upgrade, please back up important data and files beforehand!!!

This update will automatically reboot your system several times during the various upgrade phases, please expect it to take between 30–60 minutes.

Getting media

10.1-RC1 DVD/USB media can be downloaded from here via HTTP or Torrent.

Reporting Bugs

Found a bug in 10.1? Please report it (with as much detail as possible) to our bugs database.

### Planet Emacsen

#### Endless Parentheses: Aggressive-indent just got better!

aggressive-indent is quite something. I've only just released it and it seems to have been very well received. As such, it's only fair that I invest a bit more time into it.

The original version was really just a hack that was born as an answer on Emacs.SE. It worked phenomenally well on emacs-lisp-mode (to my delight), but it lagged a bit on c-like modes.

The new version, which is already on Melpa, is much smarter and more optimised. It should work quite well on any mode where automatic indentation makes sense (python users, voice your suggestions).

As a bonus, here's a stupendous screencast, courtesy of Tu Do!

## Usage

Instructions are still the same! So long as you have Melpa configured, you can install it with:

M-x package-install RET aggressive-indent

Then simply turn it on and you’ll never have unindented code again.

(global-aggressive-indent-mode)

You can also turn it on locally with:

(add-hook 'emacs-lisp-mode-hook #'aggressive-indent-mode)

Comment on this.

### Planet Clojure

#### Planjure: A* and Dijkstra's in Om

I wrote Planjure to learn ClojureScript and Om. It's a fun little program. You paint islands on a canvas and run path-planning algorithms to find the optimal path for your ship to traverse from start to finish. Sailing across blue ocean is faster than hiking through green islands. The three algorithms implemented are Dijkstra's, A*, and depth-first.

My favorite feature is Visited mode. When you enable Visited, you'll be able to see the nodes that the algorithm had to visit in its search for the optimal path. A node is visited only if the algorithm thinks that an optimal path may use the node. An unvisited node implies that the algorithm has found a faster path that does not require that node.

In the example below, Dijkstra’s algorithm has to check almost every single node except the center of the island (because trekking across the island is more difficult than sailing around it).

But when we run A*, we find that a lot of the nodes are ignored! This is because A* has a better heuristic for determining if a node should be visited or not.

Here's one more with a more complex world:

Play around with Planjure here, and read the source code here.
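As a generic illustration of what the Visited mode above is showing (this is my own sketch, not Planjure's ClojureScript code), here is a minimal Dijkstra's on a grid where each cell has an entry cost, like ocean vs. island terrain:

```scala
import scala.collection.mutable

type Cell = (Int, Int)
type Entry = (Int, Cell) // (distance so far, cell)

// Dijkstra on a grid of per-cell entry costs (higher = harder terrain).
// Returns the cheapest total entry cost from start to goal, if reachable.
def dijkstra(cost: Vector[Vector[Int]], start: Cell, goal: Cell): Option[Int] = {
  val (rows, cols) = (cost.length, cost.head.length)
  val dist = mutable.Map(start -> 0)
  val visited = mutable.Set.empty[Cell]
  // PriorityQueue is a max-heap, so order by negated distance for a min-heap.
  val pq = mutable.PriorityQueue[Entry]((0, start))(Ordering.by((e: Entry) => -e._1))
  while (pq.nonEmpty) {
    val (d, cell) = pq.dequeue()
    if (cell == goal) return Some(d)
    if (!visited(cell)) {
      visited += cell // "visited" = optimal distance settled, as in Visited mode
      val (r, c) = cell
      for ((nr, nc) <- Seq((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
           if nr >= 0 && nr < rows && nc >= 0 && nc < cols) {
        val nd = d + cost(nr)(nc)
        if (nd < dist.getOrElse((nr, nc), Int.MaxValue)) {
          dist((nr, nc)) = nd
          pq.enqueue((nd, (nr, nc)))
        }
      }
    }
  }
  None
}
```

Turning this into A* only changes the priority: order the queue by d plus an admissible heuristic (e.g. Manhattan distance times the minimum cell cost), which is exactly why A* can leave so many more nodes unvisited than Dijkstra's in the screenshots described above.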

## October 24, 2014

### UnixOverflow

#### How can I peek at the output of a running crontab task on OpenBSD?

I have an hourly, hour-long crontab task that appends some mtr (traceroute) output every 10 minutes (so it runs for over an hour before the results are emailed back to me), and I want to see its progress so far.

On Linux, it can be done by accessing the open fd of the temporary file to which the results of the script are saved.

How can I do this on OpenBSD?

I've tried doing fstat | fgrep -e USER -e cron -e mtr, but couldn't find any temporary files at all.

### StackOverflow

#### How to add a method to Enumeration in Scala?

In Java you could:

public enum Enum {
ONE {
public String method() {
return "1";
}
},
TWO {
public String method() {
return "2";
}
},
THREE {
public String method() {
return "3";
}
};

public abstract String method();
}


How do you do this in Scala?
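One commonly suggested answer (a sketch, not the only option) is to skip scala.Enumeration and model the cases as a sealed hierarchy, which mirrors the Java per-constant method bodies directly and gives exhaustiveness checking in matches:

```scala
sealed trait Num {
  def method: String // abstract, like the Java version
}
case object ONE extends Num { def method = "1" }
case object TWO extends Num { def method = "2" }
case object THREE extends Num { def method = "3" }

assert(ONE.method == "1")
assert(List(ONE, TWO, THREE).map(_.method) == List("1", "2", "3"))
```

If you specifically need scala.Enumeration (say, for its values iterator or ordinal ids), the usual trick instead is to extend its inner Val class with extra members and add an implicit conversion from Value.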

### QuantOverflow

#### Trader's identity in a limit book

In a limit book like NASDAQ ITCH, can liquidity suppliers know the demand-side identity of a trader before or after a trade? Knowing this will help me with the theoretical model I am trying to develop, since I am wondering whether a liquidity supplier that observes, say, a trade of 90 shares that takes away liquidity up to three ticks (30 shares per tick, block-shaped book) can distinguish whether the trade came from one individual or from three individuals who each submitted a trade at one tick.

### StackOverflow

#### Trying to use Play Framework

Not sure how to describe this error, but I'll try. I'm trying to learn to use the Play Framework for a project I'm collaborating on; I'm a few hours in but having trouble.

After creating a new project with activator, entering the directory and then typing activator, I then type "play" and get this output:

➜  ix  activator
[info] Updating {file:/Users/ace/Projects/ix/project/}ix-build...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Set current project to ix (in build file:/Users/ace/Projects/ix/)

Hey,

Recently upgraded to 24.4 and I no longer appear to be able to insert a dollar symbol while in latex-mode. I can press M-1 $ and it'll insert a dollar, or I can type \$\$ and then delete the slashes. It gives me the error:

symbol's function definition is void: texmathp

which I don't really understand. I appear to be using the latest version of AUCTeX. Any help appreciated. Thanks!

submitted by uint64 [link] [2 comments]

### StackOverflow

#### Real SBT Classpath at Runtime

I have some test cases that need to look at the classpath to extract the paths of some files/directories in there. This works fine in the IDE. The problem is that, when running sbt test, Properties.javaClassPath gives me /usr/share/sbt-launcher-packaging/bin/sbt-launch.jar. The classpath is fine when I run show test:dependency-classpath. Is there a way to obtain that information from inside the running Scala/Java program? Or is there a way to toss it into a system property or environment variable?

#### StackOverflowError for mergesort in scala

I have implemented the mergesort code below, but I get a StackOverflowError in the merge procedure of the algorithm when the number of integers is as high as 100000. I'm using pattern matching with recursion for the merge procedure. I understand that using recursion for the merge procedure is not optimal, given that the depth for this input would be as high as 50000. But since I'm using Scala, I was expecting some compiler optimization to make the recursive calls iterative, since I assumed these are tail-recursive calls. Can you please help me understand why I still get a StackOverflowError in the code below? Please also provide input on how I can write this more efficiently in Scala.
Below is the code:

package common

object Merge {
  def main(args: Array[String]) = {
    val source = scala.io.Source.fromFile("IntegerArray.txt")
    val data = source.getLines.map { line => line.toInt }.toList
    println(data.length)
    val res = mergeSort(data)
    println(res)
  }

  def mergeSort(data: List[Int]): List[Int] = {
    if (data.length <= 1) {
      data
    } else {
      val mid = data.length / 2
      val (l, r) = data.splitAt(mid)
      val l1 = mergeSort(l)
      val l2 = mergeSort(r)
      merge(l1, l2)
    }
  }

  def merge(l: List[Int], r: List[Int]): List[Int] = {
    l match {
      case List() => r
      case x :: xs =>
        r match {
          case List() => l
          case y :: ys =>
            if (x < y) x :: merge(xs, r)
            else y :: merge(l, ys)
        }
    }
  }
}

Below is the exception I get:

Exception in thread "main" java.lang.StackOverflowError
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:30)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:32)
	at common.Merge$.merge(Merge.scala:30)

### Fefe

#### Don Alphonso on "the Aufschrei network". I recommend ...

Don Alphonso on "the Aufschrei network". I recommend it not for its news value or because new things are reported there. But just the list of contemptuous tweets he links there left me a bit speechless. I don't have a Twitter account and generally stay out of web forums and image boards, because I have no appetite for proles yelling at each other. So I had no idea how literally the term "shitstorm" describes reality. The sheer hate that washes toward you there in the name of feminism caught me rather cold.

As I said, I'm not deep into the debate, because I deliberately stay out of it. That is mostly because the very first side whose arguments I wanted to hear immediately wanted to dictate which and whose arguments I was allowed to listen to and which not. When that happens, I normally give the whole topic a wide berth and stop taking any of the participants seriously. There is a nice saying attributed to George Bernard Shaw:

I learned long ago, never to wrestle with a pig. You get dirty, and besides, the pig likes it.

That applies not only to wrestling with pigs but also to internet discourse in web forums, and on Twitter it is evidently another notch worse. Incidentally, it is also a proven maxim in life outside the internet. I rarely mince words here on this blog, but that doesn't mean you have to attack other people so personally. It's one thing to do that to politicians who do things in my name that I don't approve of. It's an entirely different thing to simply throw filth at complete strangers on the internet.

So, in that spirit: we are no longer stone-age cave dwellers. Please behave accordingly.

Respect, by the way, for Don Alphonso's stamina. I wouldn't have had the energy to work through that avalanche of filth in order to link the best-of. Incidentally: have you seen the Non-White-Heterosexual-Male License yet?

#### ECJ: Embedding YouTube videos does not infringe ...

ECJ: embedding YouTube videos does not infringe copyright. It says a lot about our legal system that the ECJ had to be called on at all for this insight.

### Halfbakery

#### Pictures of You (0.5)

### StackOverflow

#### Blocking code wrapped inside Akka Future also blocks the thread backed by Future, then how is future helpful in this case

The philosophy behind Akka/Scala Futures is that whenever we find a blocking piece of code, such as an IO call or a network call, we should wrap it in a Future and asynchronously get the result at some later point in time. But the blocking piece of code that was blocking the main thread earlier is now blocking a separate thread that the Future is backed by. So what did the Akka/Scala Future buy us?

val blockingCallResult: Result = block() // blocks the thread of execution

Now let's use an Akka/Scala Future and wrap the blocking call with Future:

val future = Future[Result] {
  val blockingCallResult: Result = block() // also blocks, on some thread in the thread pool
  blockingCallResult
}

How do we benefit from using the Future?

### /r/compsci

#### What's the name for Sorting Algorithms that utilize probability?

Say, for instance, you know that you're going to be sorting numbers within a certain interval and you know something about their distribution (uniform, Gaussian... what have you), and you use that to your advantage when inserting numbers into a list. What are these algorithms called, and how efficient are they? Thanks a bunch!
submitted by Syntaximus [link] [4 comments]

### StackOverflow

#### install zeromq for logstash in Solaris 5.10

I used this: http://zeromq.org/distro:debian to get zeromq on Ubuntu to use it with logstash, and I want to achieve the same within Solaris. I read that zeromq supports Solaris but only provides a tarball. How can I do the equivalent of apt-get for zeromq on Solaris?

### CompsciOverflow

#### from when do we have the field of computer vision? [closed]

When was the field of computer vision created, and what are the disciplines that gave birth to it? I would appreciate it if you could give some web references that have the answer.

### /r/compsci

#### How to make an asynchronous logic circuit for a 3-bit up counter which counts only even numbers?

Counting like 010, 100, 110, 010, 100, 110, 010...

submitted by saabr [link] [3 comments]

### CompsciOverflow

#### Order of storage of pointers for a linked list of length n

I have a linked list in which I keep the elements in order (increasing/decreasing). I want to be able to perform a binary search on this linked list. For this I am keeping pointers to the middle elements of lists of length n, n/2, n/4 and so on. I am not able to figure out what the order of storage would be in this case. Any ideas?

### Lambda the Ultimate Forum

#### Whither Flow Analysis?

In the comments to a post about CSE in Guile, NeelK mentions we need more flow analysis. The other day I was reading "A Flow-Analysis Framework for Realistic Scheme Programs", which was done in Scheme48. GHC does some fusion-related work. What is the state of the art? Who is working on it? When will it come to a compiler / JavaScript interpreter near me?

### CompsciOverflow

#### Can a transcendental number like $e$ or $\pi$ be compressed as not algorithmically random?

The related and interesting fields of Information Theory, Turing Computability, Kolmogorov Complexity and Algorithmic Information Theory give definitions of algorithmically random numbers.
An algorithmically random number is a number (in some encoding, usually binary) for which the shortest program (e.g using a Turing Machine) to generate the number, has the same length (number of bits) as the number itself. In this sense numbers like$\sqrt{e}$or$\pi$are not random since well known (mathematical) relations exist which in effect function as algorithms for these numbers. However, especially for$e$and$\pi$(which are transcendental numbers) it is known that they are defined by infinite power series. For example$e = \sum_{n=0}^\infty \frac{1}{n!}$So even though a number, which is the binary representation of$\sqrt{e}$, is not alg. random, a program would (still?) need the description of the (infinite) bits of the (transcendental) number$e$itself. Can transcendental numbers (really) be compressed? Where is this argument wrong? UPDATE: Also note the fact that for almost all transcendental numbers, and irrational numbers in general, the frequency of digits is uniform (much like a random sequence). So its Shannon entropy should be equal to a random string, however the Kolmogorov Complexity, which is related to Shannon Entropy, would be different (as not alg. random) Thank you ### /r/netsec #### Unbreakable filter ### CompsciOverflow #### If P != NP, then 3-SAT is not in P I hope I'm in the right section: I know that if P = NP, then 3-SAT can be solved in P (Cook), but is the opposite valid, too? If P != NP, then 3-SAT is not in P? Thanks! #### Operating Systems Qualifying Exam Visual Study Aids I have read the Operating System Concepts book, watched a lot of videos on YouTube, practiced with past qualifying exam questions and several exams from other CS departments around the world. Although I feel better about going into the exam, I have been searching for: • Large diagram that puts most of OS concepts in perspective • Video that shows the lifecycle of a process, etc… I can undersand full-well why such a digram may not exist. 
It is possible to propose a case of a concept and then diagram its "life-cycle"... but I haven't found one. Has anyone seen anything like this? I am a very visual person and things make more sense to me when I can express them in a diagram. At this point I feel that I will draw the diagram myself; this way I can test my knowledge and also share it with others. But before I do that, I want to make sure I am not re-inventing the wheel. If you are going to down-vote my question, please leave a comment also.

### StackOverflow

#### Option.fold - why is its second argument not a binary operator?

I'm sure there's a good reason for this, but I'm not seeing it. Fold on (say) List returns the result of applying the fold operator op between all the elements and z. It has an obvious relationship with foldLeft and foldRight, which do the same thing but with a defined order (and so don't need associative operators). Fold on Option "returns the result of applying f to this scala.Option's value if the scala.Option is nonempty. Otherwise, evaluates expression ifEmpty." ifEmpty is (in the position of) the z for a List. f is (in the position of) the op. For None (which, using my mental model of Option as a "container" that may or may not contain a value, is an "empty" container), things are OK: Option.fold returns the zero (the value of ifEmpty). For Some(x), though, shouldn't f take two params, z and x, so it's consistent with the fold on sequences (including having a similar relationship to foldLeft and foldRight)? There's definitely a utility argument against this - having f just take x as a parameter is in practice probably more convenient. In most cases, if it did also take z, that would be ignored. But consistency matters too... So can someone explain to me why fold on Option is still a "proper" fold?

### Planet Clojure

#### Elastisch 2.1.0-beta9 is released

## TL;DR

Elastisch is a battle-tested, small but feature-rich and well documented Clojure client for ElasticSearch.
It supports virtually every ElasticSearch feature and has solid documentation. 2.1.0-beta9 is a preview release of Elastisch 2.1 which introduces a minor feature.

## Changes between Elastisch 2.1.0-beta8 and 2.1.0-beta9

### Ability to Specify Aliases In index.create-template

clojurewerkz.elastisch.rest.index/create-template now supports the :aliases option. Contributed by Jeffrey Erikson.

## Changes between Elastisch 2.1.0-beta7 and 2.1.0-beta8

### clj-http Update

The clj-http dependency has been upgraded to version 1.0.x.

### Allow Retry On Conflict Option

Updates and upserts now allow the retry-on-conflict option to be set. This helps to work around Elasticsearch version conflicts. GH issue: #119. Contributed by Michael Nussbaum (Braintree).

## Changes between Elastisch 2.1.0-beta6 and 2.1.0-beta7

### REST API Bulk Indexing Filters Out Operation Keys

clojurewerkz.elastisch.rest.bulk/bulk-index now filters out all operation/option keys so that they don't get stored in the document body. GH issue: #116. Contributed by Michael Nussbaum (Braintree).

## Full Change Log

The Elastisch change log is available on GitHub.

## Thank You, Contributors

Kudos to Michael Nussbaum and Jeffrey Erikson for contributing to this release.

## Elastisch is a ClojureWerkz Project

Elastisch is part of the group of libraries known as ClojureWerkz, together with

• Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
• Monger, a Clojure MongoDB client for a more civilized age
• Cassaforte, a Clojure Cassandra client
• Titanium, a Clojure graph library
• Neocons, a client for the Neo4J REST API
• Welle, a Riak client with batteries included
• Quartzite, a powerful scheduling library

and several others. If you like Elastisch, you may also like our other projects. Let us know what you think on Twitter or on the Clojure mailing list.

## About the Author

Michael on behalf of the ClojureWerkz Team

### TheoryOverflow

#### When is counting coins NP-complete?
[on hold]

I'm having a bit of an issue with this question and deciding which of these situations requires dynamic programming and which are NP-complete. All three (except the last one) ask how much goes to person A and B if they split the amount down the middle:

A. There are N coins of an arbitrary number of different denominations, but each denomination is a power of 2 (i.e. 2 cents, 4 cents, etc.)
B. There are N bills of arbitrary denominations.
C. Same as B, but now person A and B want to split the tips in two unequal parts, provided the difference between them is less than 10 (i.e. |A| - |B| < 10).

### Planet Clojure

#### Meltdown 1.1.0 is released

## TL;DR

Meltdown is a Clojure interface to Reactor, an asynchronous programming, event passing and stream processing toolkit for the JVM. 1.1.0 is a minor release that updates Reactor to the most recent point release.

## Changes between 1.0.0 and 1.1.0

### Reactor Update

Reactor is updated to 1.1.x.

## Change log

The Meltdown change log is available on GitHub.

## Meltdown is a ClojureWerkz Project

Meltdown is part of the group of libraries known as ClojureWerkz, together with

• Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
• Elastisch, a Clojure client for ElasticSearch
• Monger, a Clojure MongoDB client for a more civilized age
• Cassaforte, a Clojure Cassandra client
• Titanium, a Clojure graph library
• Neocons, a client for the Neo4J REST API
• Quartzite, a powerful scheduling library

and several others. If you like Meltdown, you may also like our other projects. Let us know what you think on Twitter or on the Clojure mailing list.

## About the Author

Michael on behalf of the ClojureWerkz Team

### Lobsters

#### MIT computer scientists can predict the price of Bitcoin

### QuantOverflow

#### Statistical arbitrage using eigen portfolios

I was trying to understand the paper below: https://www.math.nyu.edu/faculty/avellane/AvellanedaLeeStatArb071108.pdf Page 20 explains "Entering a trade".
I want to know clearly what it means to place a long trade in the case of arbitrage using eigen portfolios. I greatly appreciate your inputs.

### /r/clojure

#### Analysis of the State of Clojure and ClojureScript Survey 2014

### TheoryOverflow

#### Why is shifting bits different from shifting qubits?

In classical circuit complexity, shifting bits is considered gratis; all you have to do is reorganize wires between corresponding gates. By contrast, shifting qubits is typically done by using a series of quantum SWAP gates, which are each composed of three CNOT gates; the shifting contributes to the total circuit complexity. So where does the difference come from? Is it due to the physical implementation of qubits?

### StackOverflow

#### Ansible remote provisioning of Vagrant

I'm using a remote Ansible server to provision my production server, and that works well. Now I thought about using this Ansible server to provision my Vagrant VMs. Is this possible somehow? I thought about a shell-script provisioner in the Vagrantfile that logs into the Ansible server via ssh and executes the playbook command towards the VM on the local machine. I don't have too much experience with shell scripts. Has anybody tried this, or can you tell me a better way to do it?

#### Compile error using Slick MappedColumnType for static query

I have the following mapper so that I can use Joda DateTime values in my Slick models and queries:

```scala
import java.sql.Timestamp
import org.joda.time.DateTime
import scala.slick.driver.MySQLDriver.simple._

object Mappers {
  implicit def joda = MappedColumnType.base[DateTime, Timestamp](
    dt => new Timestamp(dt.getMillis),
    ts => new DateTime(ts.getTime)
  )
}
```

The table classes that I have defined containing a DateTime field appear to compile fine by importing this.
However, a static query like this will not:

```scala
sql"""select s.expiresAt from tablename s limit 1""".as[DateTime].first
```

I get this error:

```
could not find implicit value for parameter rconv: scala.slick.jdbc.GetResult[org.joda.time.DateTime]
```

What do I need to add to make this work?

#### Slick: Option column filtering

I want to do something like this (this is a made-up example to simplify my actual problem):

```scala
def findByGender(isMale: Option[Boolean]) = {
  People.filter(row => row.name.isNotNull && isMale match {
    case Some(true)  => row.wife.isNotNull // find rows where wife column is not null
    case Some(false) => row.wife.isNull    // find rows where wife column is null
    case None        => true               // select everything
  })
}
```

This does not compile because of the last "true". Any better way to do this?

### Fefe

#### The good news: Jesus is coming back! The bad ...

The good news: Jesus is coming back! The bad news: as Python.

#### The European Court of Human Rights has ...

The European Court of Human Rights has ruled against Germany. The case concerned a German who had been convicted of drug dealing, but who had been incited and talked into the crime in the first place by undercover investigators. The ECtHR deemed this a violation of his right to a fair trial. Money quote: accordingly, no evidence obtained through incitement by police officers may be used in court. The man now gets a few euros in compensation (a bit little, I think, but still); the important thing about the case, of course, is that this landmark ruling now exists. The question, of course, is how it can be proven in an individual case who talked whom into what, and when.

#### In a survey, almost 75% of Danes came out in favor of ...

In a survey, almost 75% of Danes came out in favor of a ban on circumcision. Two Danish parties are demanding a ban; in the other parties the decision-making process is still under way.
### /r/clojure

#### Lambda Pumpkin

### StackOverflow

#### Calling a function with TypeTag recursively

I'm playing with Scala TypeTag. I want to recursively call a function with a TypeTag parameter. Here is a simplified example of what I'm trying to do:

```scala
import scala.reflect.runtime.universe._

object TypeTagTest extends App {

  def intValue[T](value: T)(implicit tag: TypeTag[T]): Int = {
    tag.tpe match {
      // integer
      case intType if intType <:< typeOf[Int] => value.asInstanceOf[Int]
      // string
      case stringType if stringType <:< typeOf[String] => value.asInstanceOf[String].toInt
      // option of either string or integer
      case optionType @ TypeRef(_, _, typeArg :: Nil) if optionType <:< typeOf[Option[_]] =>
        println(s"Unwrapped type is $typeArg")
        val option = value.asInstanceOf[Option[_]]
        option.map { optionValue =>
          // how to pass the typeArg here?
          intValue(optionValue)
        }.getOrElse(0)
    }
  }

  println(intValue(1))
  println(intValue("1"))
  println(intValue(Some("1")))

}
```
This code compiles and runs:

```
1
1
Exception in thread "main" scala.MatchError: Any (of class scala.reflect.internal.Types$TypeRef$$anon$6)
  at TypeTagTest.intValue(TypeTagTest.scala:7)
  at TypeTagTest$$anonfun$intValue$2.apply(TypeTagTest.scala:19)
  at TypeTagTest$$anonfun$intValue$2.apply(TypeTagTest.scala:18)
  at scala.Option.map(Option.scala:145)
```

A couple of questions:

1. How do I pass the type information when the recursive call is made?
2. Is there a way to make this pattern matching a little less ugly?

#### Akka cluster-sharding: Can Entry actors have dynamic props?

Akka Cluster Sharding looks like it matches well with a use case I have, to create single instances of stateful persistent actors across Akka nodes. I'm not clear if it is possible, though, to have an Entry actor type that requires arguments to construct it. Or maybe I need to reconsider how the Entry actor gets this information.

```scala
object Account {
  def apply(region: String, accountId: String): Props =
    Props(new Account(region, accountId))
}

class Account(val region: String, val accountId: String) extends Actor with PersistentActor {
  ...
}
```

Whereas ClusterSharding.start takes in a single Props instance for creating all Entry actors:

```scala
val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entryProps = Some(Props[Counter]),
  idExtractor = idExtractor,
  shardResolver = shardResolver)
```

And then it resolves the Entry actor that receives the message based on how you define the idExtractor.
From the source code for Shard it can be seen that it uses the id as the name for a given Entry actor instance:

```scala
def getEntry(id: EntryId): ActorRef = {
  val name = URLEncoder.encode(id, "utf-8")
  context.child(name).getOrElse {
    log.debug("Starting entry [{}] in shard [{}]", id, shardId)
    val a = context.watch(context.actorOf(entryProps, name))
    idByRef = idByRef.updated(a, id)
    refById = refById.updated(id, a)
    state = state.copy(state.entries + id)
    a
  }
}
```

It seems I should instead have my Entry actor figure out its region and accountId from the name it is given, although this does feel a bit hacky now that I'll be parsing them out of a string instead of directly getting the values. Is this my best option?

### /r/clojure

#### Very excited to present my work on the automatic project scheduling library I did for Clojure Cup... It has come a long way!

### /r/systems

#### Compressed full-text indexes [PDF, 2007]

### /r/clojure

#### Understanding Clojure's Persistent Vectors, pt. 3

### StackOverflow

#### Playframework: How to authenticate with play2-auth on a POST and not lose data

I'm using Playframework 2.3 and play2-auth for authentication and authorization. I have a website with two routes:

```
GET  /claim  MyController.claim
POST /claim  MyController.submitClaim
```

Then my controller looks like:

```scala
def claim = Action { implicit request =>
  Ok(views.html.forms.claimFormPage(claimForm))
}

def submitClaim = StackAction(AuthorityKey -> Member) { implicit request =>
  claimForm.bindFromRequest.fold(
    formWithErrors => BadRequest(formWithErrors),
    data => {
      // save to DB
      // redirect
    }
  )
}
```

So GET /claim is accessible to anybody and is a simple page with a form on it, but POST /claim requires you to be signed in. I want the behavior to be like this:

• Anybody can go to /claim and fill out the form.
• When the user submits the form, IF they are not already logged in, it asks them to log in. Then on successful login, it finishes submitting the form (i.e.
invokes my submitClaim controller method with correct post data).
• If they are already logged in when they try to submit the form, nothing happens that the user would notice. It just works.

But unfortunately, that is not working. What happens instead is:

• If they are already logged in when they submit the form, everything works perfectly.
• If they are not already logged in when they submit the form, my app asks them to log in, but then on successful login, my controller method claim is invoked instead of submitClaim like it is supposed to be.

My AuthConfig (which is where I assume the change needs to be) is straight out of the play2-auth documentation and looks like this:

```scala
trait AuthConfigImpl extends AuthConfig {

  // Other settings are omitted.

  def loginSucceeded(request: RequestHeader)(implicit ctx: ExecutionContext): Future[Result] = {
    val uri = request.session.get("access_uri").getOrElse(routes.Message.main.url.toString)
    Future.successful(Redirect(uri).withSession(request.session - "access_uri"))
  }
}
```

Thanks in advance!

### /r/systems

#### Inverted Files Versus Signature Files for Text Indexing [PDF, 1998]

#### Fast, Incremental Inverted Indexing in Main Memory for Web-Scale Collections [arXiv, 2013]

### StackOverflow

#### Testing if the static types of 2 definitions are equal

Let's say I come up with a combinator:

```scala
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
  Kleisli[M, Option[A], Option[B]] {
    case Some(t) => fn(t).map(_.some)
    case None    => Applicative[M].point(none[B])
  }
```

This combinator maps any Kleisli[M, A, B] to Kleisli[M, Option[A], Option[B]].
However, after some time, I realize (admittedly with the help of estewei on #scalaz) this can be made to work with containers more general than just Option, namely anything for which there is a Traverse instance:

```scala
def traverseKleisli[M[_]: Applicative, F[_]: Traverse, A, B](k: Kleisli[M, A, B]) =
  Kleisli[M, F[A], F[B]](k.traverse)
```

so that optional can now be defined as:

```scala
def optional[M[_]: Applicative, A, B](fn: Kleisli[M, A, B]) =
  traverseKleisli[M, Option, A, B](fn)
```

However, I'd like to verify that at least the resulting type signature is equal to the original definition of optional, and whereas I could resort to hovering over both definitions in my IDE (Ensime in my case) and comparing the responses, I'd like a more solid way of determining that. I tried:

```scala
implicitly[optional1.type =:= optional2.type]
```

but (obviously?) that fails due to both identifiers being considered unstable by scalac. Other than perhaps temporarily making both of the functions objects with an apply method, are there any easy ways to compare their static types without resorting to relying on hints from IDE presentation compilers?

P.S. the name optional comes from the fact that I use this combinator as part of a validation DSL to take a Kleisli-wrapped Validation[String, T] and turn it into a Kleisli-wrapped Validation[String, Option[T]] that verifies the validity of optional values if present.

### QuantOverflow

#### Where can I find free historical market cap data? [duplicate]

This question already has an answer here: I am looking to find free historical data for assorted companies listed on the TSX and TSX Venture. I can find daily closing prices (among a few other data fields) on Quandl, but I cannot find daily closing market capitalization values anywhere for free. I also found YCharts, and it has the data I'm looking for, but it is not free. Has anyone found this data? Also, why wouldn't this data be freely available like closing prices?
This post provides some useful data sources, but none of them can provide this kind of information.

### /r/netsec

#### Why I don't trust copy-paste

### /r/compsci

#### please help, my class can't do it

The New Telephone Company has the following rate structure for long distance calls:

a) Any call started after 6:00 pm (1800 hours), but before 8:00 am (0800 hours) is discounted 50%.
b) Any call started after 8:00 am (0800 hours), but before 6:00 pm (1800 hours) is charged full price.
c) All calls are subject to Goods and Services Tax (GST) (7%).
d) The regular rate for a call is 0.40 per minute.
e) Any call longer than 60 minutes receives a 15% discount (after any other discount is subtracted and before the tax is added).

Write a Turing program that asks for the starting time and finishing time for a phone call (use a twenty-four hour clock). The gross cost (before any discounts or tax) should be printed, followed by any discount(s) and the net cost (after discounts are deducted and tax is added). Your output is rounded to 2 decimal digits.

submitted by burgerblaster

### StackOverflow

#### Better way to override a function definition

I'm extending an abstract class from a library, and I want to override a function definition and use the superclass's definition as a fallback. If the parent class's method were defined as a PartialFunction, I could just use orElse. Since it isn't, I'm doing the thing below (unlifting the parent function into a partial function so I can use orElse). It works, but it is one of those times when I suspect there is a better / more elegant way. Is there?
```scala
abstract class ThingICantChange {
  def test: Int => String = {
    case 1 => "one"
    case 2 => "two"
    case _ => "unknown"
  }
}

class MyClass extends ThingICantChange {
  def myChanges: PartialFunction[Int, String] = {
    case 2 => "mytwo"
    case 3 => "three"
  }

  override def test = myChanges orElse Function.unlift(x => Some(super.test(x)))
}
```

### CompsciOverflow

#### Write a Special Code on C# [on hold]

I want to write monitoring software for the users in a company. How can I write a program (or service) in such a way that no user is able to stop it on the client, i.e. so that it cannot be stopped by any technique? (My programming language is C#.)

### Planet Theory

#### Harvard Junior Faculty Job Search

Harvard Computer Science is doing a junior faculty search this year. We pretty much have been doing one every year, but it's nice to be able to make an announcement and point people to the ad. So here is the ad, telling you what you need to send in if you're applying. Also as usual, we're looking in all areas, because we're always interested in great people. However, in the interest of full disclosure, the focus this year is described as:

This is a broad faculty search and we welcome outstanding applicants in all areas of computer science, including applicants whose research and interests connect to such areas as engineering, health and medicine, or the social sciences. Of particular interest are candidates with a focus on systems and data science, including operating systems, networking, software engineering, data storage and management, and computational science.

We'll look forward to your application.

### Lobsters

#### Calories in, calories out

### TheoryOverflow

#### Reconstructing a string from random samples

What is known about the following problem? You're asked to reconstruct a string $S$ of known length $n$ over a known alphabet $\Sigma$ from a collection of uniformly and independently chosen $t$-long subsequences of $S$.
Recall that a $t$-long subsequence of $S=\langle s_1,\ldots,s_n\rangle$ is a sequence $\langle s_{\varphi(1)},\dots,s_{\varphi(t)}\rangle$ where $\varphi$ is a non-decreasing function from $\{1,\ldots,t\}$ to $\{1,\ldots,n\}$.

### StackOverflow

#### Does Shapeless 2.0.0 lens work with Lists/collections?

If I have this:

```scala
case class Foo(
  age: Int,
  name: String,
  bar: List[Bar],
  alive: Boolean
)

case class Bar(
  hey: String,
  you: String
)
```

Can I create a lens that can get/set foo.bar(1).you?

#### Returning parameterized List from Scala function

I am getting an error in the following code:

```scala
def times(chars: List[Char]): List[Int] = {
  List(1)
}
```

The error says the following:

```
type mismatch;
 found   : scala.collection.immutable.List[Int]
 required: <empty>.List[Int]
```

Any idea why this error?

### /r/compsci

#### [paper request] Connected operators and pyramids

"Connected operators and pyramids", Jean C. Serra; Philippe Salembier, Proc. SPIE 2030, Image Algebra and Morphological Image Processing IV, 65 (June 23, 1993); doi:10.1117/12.146672. From Conference Volume 2030, Image Algebra and Morphological Image Processing IV, Edward R. Dougherty; Paul D. Gader; Jean C. Serra, San Diego, CA | July 11, 1993.

Does anyone have this paper? If so, a link to a pdf, or whatever, would be greatly appreciated. I wasn't able to find a (free) pdf via Google, so I thought I'd give reddit a try, hoping this is not regrettable.

submitted by davini

### StackOverflow

#### Still not found object though import the package

I created a Java project and defined some classes, then exported it as a jar file. In another Java project I added the jar to the build path via 'add external jars' in Eclipse and imported the package; everything works fine and I can use the classes defined in the jar. However, when trying to add the jar to my Scala/Play framework project by doing the same thing, the compiler keeps telling me 'not found : Object packageXXX'.
What is the possible error? I did a clean of the project and tried multiple times, but still. ### Planet Clojure #### Analysis of the State of Clojure and ClojureScript Survey 2014 Yesterday, we posted the raw results from the 2014 State of Clojure and ClojureScript survey, where you can find not only the raw results but the methodology and other details of how it was conducted. I want to first thank Chas Emerick for having launched the survey back in 2010 and repeating it through 2013, and for reaching out to Alex to run it this year when he could not. As always, the purpose of the survey is to shed some light on how and for what Clojure and ClojureScript are being adopted, what is going well and what could stand improvement. ## What's the overview? We'll look at the individual questions below, but there are some demonstrable trends we can tease out of these responses. 1. Clojure (and ClojureScript) are seeing increasing use for commercial purposes, with noticeable growth on all measures where this survey tracks such things. From use on commercial products and services, to "I use it at work", we're seeing strong positive movement. 2. ClojureScript is coming along for the ride - even though it does not seem to have a substantial independent identity separate from Clojure, it is also seeing strong growth in commercial application. 3. The community is adding new users faster each year, which could imply accelerating growth (though remember that this is not a scientific survey). ## Let's look at the questions individually, first from the Clojure survey: How would you characterize your use of Clojure/ClojureScript/ClojureCLR today? The percentage of respondents using Clojure at work has nearly doubled since the 2012 survey (38% to 65%), which is the big news here. This would seem to comport with the changes we see in the domains question, and a continued sign of robust commercial adoption of the platform. 
In which domains are you applying Clojure/ClojureScript/ClojureCLR?

Web development is still the top dog, and that helps explain the continued increase in usage of ClojureScript as well. What is significant is the jump in respondents working on commercial products and services (jumping from the low 20s to the low-to-mid 30s), while NoSQL and math/data analysis took a small tumble, essentially reversing positions with commercial development. Network programming is the only other thing to make a substantial move (dropping down about 10%). Really the takeaway is that commercial development is gaining steadily, demonstrating a continued growth of Clojure in commercial settings, with a quite dramatic increase from 2012 (12% and 14%, respectively).

How long have you been using Clojure?

While the answer distribution has remained largely the same, the relative growth of the "Months" response (moving up a slot) matches up with other metrics, like Kovas Boguta's post analyzing GitHub metrics, to show a continued picture of accelerating growth in the development community.

Do you use Clojure, ClojureScript, or ClojureCLR?

The JVM platform clearly dominates, which is no shock. There is no cross-over in the responses between Clojure and ClojureCLR. However, a quite impressive 54.9% of respondents are also using ClojureScript, which is a measurable increase since 2013, though there is still no significant sign in the survey results of a ClojureScript-only userbase. This seems to imply a continuation of the theme from last time: ClojureScript is adopted (in growing numbers) by existing Clojure developers, not as an independent entity. We have not previously had ClojureCLR explicitly included in this survey. The recent release of Arcadia (ClojureCLR + Unity gaming engine) may have spurred some recent interest in the ClojureCLR platform. Thanks as always to David Miller's tireless efforts in this area.

What is your *primary* Clojure/Script/CLR dev environment?
Cursive (a Clojure IDE built on IntelliJ) is the big winner, jumping dramatically to second place. Interesting that Light Table saw absolute growth in both respondents and percentage, but still fell a spot due to Cursive's massive growth. While Emacs continues to dominate, it is great to see a vibrant collection of options here, to suit every developer's and team's needs. Which versions of Clojure do you currently use in production or development? Great to see that 1.6 dominates and it would seem that everyone is able to keep up with the new releases. A full 18% are using the 1.7 alphas in production or development already. What version of the JDK do you target? These answers comport with other survey results recently released, showing a rapid uptake of 1.8 at 49% of respondents. 1.7 is still the most common platform at 70%, and 1.6 is slowly fading at only 14%. Last year, 1.6 was still 19% of the sample, while 1.8 was a mere 5%. What tools do you use to compile/package/deploy/release your Clojure projects? Leiningen would now appear to be ubiquitous, at a whopping 98% of respondents using it (up from 75% last year). There isn't a significant change anywhere else. What has been most frustrating for you in your use of Clojure? There has been remarkably little motion in this list over the years. Staffing concerns, which jumped all the way to #2 last year, fell a spot this year, falling behind documentation/tutorials. It is interesting to note that #4, finding editing environments, remains steady even though there has been dramatic shifts and growth within the editor responses. Otherwise, hard to see any new trends here. Congratulations to everyone for "unpleasant community interactions" continuing to come in dead last. ## Next, let's look at the ClojureScript survey How would you characterize your use of ClojureScript today? Once again, we see a dramatic jump in usage at work - from 28% to 49% in just the last year. Serious hobby also climbed roughly 20%. 
It would appear that the rising tide floats all boats, as the entire Clojure ecosystem is seeing growth in commercial development use. Which JavaScript environments do you target? Browsers are now ubiquitous, being targeted by 97% of the community, with everything else on the list remaining largely unchanged. Which ClojureScript REPL do you use most often? Chas was quite distressed last year to note that more than a fourth of the respondents used no REPL at all. This year, that number is now almost a third. On the other hand, Austin took a major jump all the way to #2 at 22%, probably due to his commitment to it after last year's survey. Light Table also came from literally nowhere (not named on last year's survey) to occupy the third spot. In fact, it wasn't even included in the original responses until a bunch of people requested it be added, so it might be under-represented in the list. So even though even more people aren't using a REPL at all, the options seem to be growing. What has been most frustrating for you in your use of CLJS? Through the change in the question style, we can get a better picture of the real answers here. The difficulty of using a REPL jumps from 14% in 2013 to a whopping 68% this year, while debugging generated JavaScript rises from 14% to 43%. This is a much better window into how much pain those two items cause the community. It is impossible to tell because of this change in survey methodology, but the addition of CLJS source map support has most likely made that issue less difficult for many. ## The Takeaway While not a scientific survey, with five years of data it is really possible to spot trends. Clojure has clearly transitioned from exploratory status to a viable, sustainable platform for development at work. And as the community continues to add new users at an accelerating pace, we should only expect that trend to continue. 
Our thanks to everyone who took the time to fill out the survey - this kind of sharing is invaluable to the wider community.

### Twitter

#### Breakout detection in the wild

Nowadays, BigData is leveraged in every sphere of business: decision making for new products, gauging user engagement, making product recommendations, health care, data center efficiency and more. A common form of BigData is time series data. With the progressively decreasing costs of collecting and mining large data sets, it’s become increasingly common for companies – including Twitter – to collect millions of metrics on a daily basis [1, 2, 3].

Exogenic and/or endogenic factors often give rise to breakouts in a time series. Breakouts can potentially have ramifications for the user experience and/or a business’ bottom line. For example, in the context of cloud infrastructure, breakouts in time series of system metrics – which may happen due to hardware issues – could impact the availability and performance of a service. Given the real-time nature of Twitter, and that high performance is key for delivering the best experience to our users, early detection of breakouts is of paramount importance. Breakout detection has also been used to detect change in user engagement during popular live events such as the Oscars, Super Bowl and World Cup.

A breakout is typically characterized by two steady states and an intermediate transition period. Broadly speaking, breakouts come in two flavors:

1. Mean shift: A sudden jump in the time series corresponds to a mean shift. A sudden jump in CPU utilization from 40% to 60% would exemplify a mean shift.
2. Ramp up: A gradual increase in the value of the metric from one steady state to another constitutes a ramp up. A gradual increase in CPU utilization from 40% to 60% would exemplify a ramp up.

The figure below illustrates multiple mean shifts in real data.
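To make the mean-shift flavor concrete, here is a toy sketch (in Scala; not Twitter's implementation, and entirely my own illustration) that estimates the size of a shift by comparing the medians of two steady-state windows. Medians rather than means, since robustness to anomalies is exactly the point the post makes:

```scala
// Toy sketch, not the EDM algorithm: estimate a mean shift between two
// steady-state windows by comparing their medians, which are robust to
// the occasional anomalous spike.
def median(xs: Seq[Double]): Double = {
  val s = xs.sorted
  val n = s.length
  if (n % 2 == 1) s(n / 2) else (s(n / 2 - 1) + s(n / 2)) / 2.0
}

// CPU utilization jumping from ~40% to ~60%, with one 95% anomaly
// that would badly skew a mean-based estimate.
val before = Seq(40.0, 41.0, 39.0, 95.0, 40.0)
val after  = Seq(60.0, 59.0, 61.0, 60.0, 60.0)
val shift  = median(after) - median(before) // 20.0 despite the spike
```

A mean-based estimate on the same windows would report a shift of roughly 9 rather than 20, because the single 95% spike drags the first window's average up.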
Given the ever-growing number of metrics being collected, it’s imperative to detect breakouts automatically. Although a large body of research already exists on breakout detection, existing techniques are not suitable for detecting breakouts in cloud data. This can be ascribed to the fact that existing techniques are not robust in the presence of anomalies (which are not uncommon in cloud data).

Today, we’re excited to announce the release of BreakoutDetection, an open-source R package that makes breakout detection simple and fast. With its release, we hope that the community can benefit from the package as we have at Twitter, and improve it over time. Our main motivation behind creating the package has been to develop a technique for detecting breakouts that is robust, from a statistical standpoint, in the presence of anomalies. The BreakoutDetection package can be used in a wide variety of contexts: for example, detecting breakouts in user engagement after an A/B test, detecting behavioral change, or for problems in econometrics, financial engineering, and the political and social sciences.

How the package works

The underlying algorithm – referred to as E-Divisive with Medians (EDM) – employs energy statistics to detect divergence in mean. Note that EDM can also be used to detect a change in distribution in a given time series. EDM uses robust statistical metrics, viz., the median, and estimates the statistical significance of a breakout through a permutation test. In addition, EDM is non-parametric. This is important since the distribution of production data seldom (if ever) follows the commonly assumed normal distribution or any other widely accepted model. Our experience has been that time series often contain more than one breakout. To this end, the package can also be used to detect multiple breakouts in a given time series.
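The permutation-test idea mentioned above can be sketched in a few lines (again a toy illustration of the general technique, not the EDM implementation): pool the two windows, reshuffle repeatedly, and ask how often a random split produces a median difference at least as large as the observed one.

```scala
import scala.util.Random

// Toy permutation test (not the EDM implementation): estimate the
// significance of an observed difference in medians between windows a and b.
def median(xs: Seq[Double]): Double = {
  val s = xs.sorted
  if (s.length % 2 == 1) s(s.length / 2)
  else (s(s.length / 2 - 1) + s(s.length / 2)) / 2.0
}

def permutationPValue(a: Seq[Double], b: Seq[Double],
                      trials: Int = 1000, seed: Long = 42L): Double = {
  val observed = math.abs(median(a) - median(b))
  val pooled   = (a ++ b).toVector
  val rnd      = new Random(seed)
  // Count shuffled splits whose median gap is at least the observed one.
  val asExtreme = (1 to trials).count { _ =>
    val shuffled = rnd.shuffle(pooled)
    val (pa, pb) = shuffled.splitAt(a.length)
    math.abs(median(pa) - median(pb)) >= observed
  }
  asExtreme.toDouble / trials
}
```

A clearly shifted pair of windows yields a small p-value, suggesting a genuine breakout rather than noise; windows drawn from the same steady state yield a large one.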
How to get started

Install the R package using the following commands on the R console:

```r
install.packages("devtools")
devtools::install_github("twitter/BreakoutDetection")
library(BreakoutDetection)
```

The function breakout is called to detect one or more statistically significant breakouts in the input time series. Its documentation, which details the input arguments and the output, can be viewed with the following command:

```r
help(breakout)
```

A simple example

To get started, we recommend using the example dataset that ships with the package. Execute the following commands:

```r
data(Scribe)
res = breakout(Scribe, min.size=24, method='multi', beta=.001, degree=1, plot=TRUE)
res$plot
```

The above yields the following plot:

From the above plot, we observe that the input time series experiences a breakout and also has quite a few anomalies. The two red vertical lines denote the locations of the breakouts detected by the EDM algorithm. Unlike the existing approaches mentioned earlier, EDM is robust in the presence of anomalies. The change in mean in the time series can be better viewed with the following annotated plot:

The horizontal lines in the annotated plot above correspond to the approximate mean for each window (i.e., filtering out the effect of anomalies).

Acknowledgements

We thank James Tsiamis and Scott Wong for their support, and Nicholas James as the primary researcher behind this work.

### AWS

#### Multi-AZ Support / Auto Failover for Amazon ElastiCache for Redis

Like every AWS offering, Amazon ElastiCache started out simple and then grew in breadth and depth over time. Here's a brief recap of the most important milestones:

• August 2011 - Initial launch with support for the Memcached caching engine in one AWS Region.
• December 2011 - Expansion to four additional Regions.
• March 2012 - The first of several price reductions.
• April 2012 - Introduction of Reserved Cluster Nodes.
• November 2012 - Introduction of four additional types of Cache Nodes.
• September 2013 - Initial support for the Redis caching engine, including Replication Groups with replicas for increased read throughput.
• March 2014 - Another price reduction.
• April 2014 - Backup and restore of Redis Clusters.
• July 2014 - Support for M3 and R3 Cache Nodes.
• July 2014 - Node placement across more than one Availability Zone in a Region.
• September 2014 - Support for T2 Cache Nodes.

When you start to use any of the AWS services, you should always anticipate a steady stream of enhancements. Some of them, as you can see from the list above, will give you additional flexibility with regard to architecture, scalability, or location. Others will improve your cost structure by reducing prices or adding opportunities to purchase Reserved Instances. Another class of enhancements simplifies the task of building applications that are resilient and fault-tolerant.

Multi-AZ Support for Redis

Today's launch is designed to help you add resilience and fault tolerance to your Redis Cache Clusters. You can now create a Replication Group that spans multiple Availability Zones with automatic failure detection and failover. After you have created a Multi-AZ Replication Group, ElastiCache will monitor the health and connectivity of the nodes. If the primary node fails, ElastiCache will select the read replica that has the lowest replication lag (in other words, the one that is the most current) and make it the primary node. It will then propagate a DNS change, create another read replica, and wire everything back together, with no administrative work on your side. This new level of automated fault detection and recovery will enhance the overall availability of your Redis Cache Clusters.

The following situations will initiate the failover process:

1. Loss of availability in the primary's Availability Zone.
2. Loss of network connectivity to the primary.
3. Failure of the primary.
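The promotion rule described above - pick the most current replica - is simple to state in code. A minimal sketch for intuition only (the `Replica` type and its lag field are hypothetical models for this post, not the ElastiCache API):

```scala
// Hypothetical model, not the ElastiCache API: on primary failure,
// promote the read replica with the lowest replication lag, i.e. the
// replica whose data is most current.
case class Replica(id: String, replicationLagMs: Long)

def selectNewPrimary(replicas: Seq[Replica]): Option[Replica] =
  if (replicas.isEmpty) None
  else Some(replicas.minBy(_.replicationLagMs))
```

Promoting the least-lagged replica minimizes the number of writes lost during failover, which is why the "most current" rule is the natural choice for asynchronous replication.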
Creating a Multi-AZ Replication Group

You can create a Multi-AZ Cache Replication Group by checking the Multi-AZ checkbox after selecting Create Cache Cluster:

A diverse set of Availability Zones will be assigned by default. You can easily override them in order to better reflect the needs of your application:

Multi-AZ for Existing Cache Clusters

You can also modify your existing Cache Cluster to add Multi-AZ residency and automatic failover with a couple of clicks.

Things to Know

The Multi-AZ support in ElastiCache for Redis currently makes use of the asynchronous replication that is built into newer versions (2.8.6 and beyond) of the Redis engine. As such, it is subject to that mechanism's strengths and weaknesses. In particular, when a read replica connects to a primary for the first time, or when the primary changes, the replica will perform a full synchronization with the primary. This ensures that the cached information is as current as possible, but it imposes an additional load on the primary and the read replica(s).

The entire failover process, from detection to the resumption of normal caching behavior, will take several minutes. Your application's caching tier should have a strategy (and some code!) to deal with a cache that is momentarily unavailable.

Available Now

This new feature is available now in all public AWS Regions and you can start using it today. The feature is offered at no extra charge to all ElastiCache users.
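The "strategy (and some code!)" for a momentarily unavailable cache can be as simple as a guarded read-through. A minimal sketch under stated assumptions (the `fromCache`/`fromStore` functions are hypothetical stand-ins, not an AWS SDK API):

```scala
import scala.util.Try

// Minimal fallback sketch (hypothetical helpers, not an AWS API):
// try the cache first; on a miss or an error (e.g., mid-failover),
// fall through to the backing store.
def cachedGet[A](key: String)(fromCache: String => Option[A])(fromStore: String => A): A =
  Try(fromCache(key)).toOption.flatten.getOrElse(fromStore(key))
```

During the few minutes of failover, `fromCache` may throw or time out; callers still get an answer from the store, at the cost of extra load on it.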
-- Jeff;

### StackOverflow

#### Scala - Recursive Pattern Matching Using Either

I currently have something that looks like this:

```scala
data foreach {
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(a))))))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a)))))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a)))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a)))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a)))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Left(Right(a)))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Left(Right(a))))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Left(Right(a)))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Left(Right(a))))))) => /* Do something */
  case Left(Left(Left(Left(Left(Right(a)))))) => /* Do something */
  case Left(Left(Left(Left(Right(a))))) => /* Do something */
  case Left(Left(Left(Right(a)))) => /* Do something */
  case Left(Left(Right(a))) => /* Do something */
  case Left(Right(a)) => /* Do something */
  case Right(a) => /* Do something */
}
```
I was wondering if there is any way to implement some sort of recursive function to make my pattern matching cleaner. Something that would look more like this:

```scala
data foreach {
  case Foo(a, 3) => /* Do something */
  case Foo(a, 2) => /* Do something */
  case Foo(a, 1) => /* Do something */
  case Foo(a, 0) => /* Do something */
}
```

### CompsciOverflow

#### Natural language processing complexity

Which natural language processing problems are NP-complete or NP-hard? I've searched the [natural-lang-processing] and [complexity-theory] tags (and related complexity tags), but have not turned up any results. None of the recommended NLP questions are helpful; the closest are the following:

Why is natural language processing such a difficult problem?
How is natural language processing related to artificial intelligence?
What aspects of linguistics are necessary or good for natural language processing?

The Wikipedia page does not list any complexity results for NLP: http://en.wikipedia.org/wiki/List_of_NP-complete_problems#Formal_languages_and_string_processing

The only lead I've found is the following paper: "Theoretical and Effective Complexity in Natural Language Processing" http://www.aclweb.org/anthology/O95-1007

Any help or pointers are appreciated!

#### Is finding the smallest collection of subsets so that the number of elements among the subsets is <= the number of subsets NP-hard?

Given a collection of non-empty subsets of \{1,2,\ldots,N\} (N not fixed), the problem is to find the smallest non-empty collection of subsets such that the number of distinct elements appearing in the union of the subsets is less than or equal to the number of subsets chosen. This seems like an NP-hard problem, but I can't prove it. Any help?

### StackOverflow

#### Is there a Gatling2 REPL console?

I would like to use a Scala REPL console for Gatling to debug some code and evaluate it. Is there any easy way to do that? Is there a fast way to do a syntax check on Gatling scripts?
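Returning to the nested `Either` pattern-matching question a couple of posts up: one workable sketch (names are mine, not from the original post) is a small recursive function that strips `Left` wrappers and reports how many it removed, so the twenty-case ladder collapses into matches on a `(depth, value)` pair:

```scala
// Sketch for the nested-Either question above: recursively peel off
// Left wrappers, returning how many were removed plus the payload.
// The payload comes either from a terminal Right or from the value
// inside the innermost Left.
@annotation.tailrec
def unwrap(e: Any, depth: Int = 0): (Int, Any) = e match {
  case Left(inner) => unwrap(inner, depth + 1)
  case Right(a)    => (depth, a)
  case a           => (depth, a) // payload of the innermost Left
}
```

With this, `data foreach { d => unwrap(d) match { case (0, a) => /* Right(a) */ case (1, a) => /* Left(Right(a)) */ /* ... */ } }` replaces the nested ladder; the cost is losing static typing of the payload, which the `Any` in the signature makes explicit.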
### /r/netsec

#### Akamai State of the Internet Website

### /r/compsci

#### A useful resource site for computer learners

### DataTau

#### KDD 2014 Keynotes

#### LinkedIn Economic Graph Challenge

### CompsciOverflow

#### Operating System Paging concept

I am quoting a paragraph from the book "Operating System Principles" by Galvin:

Usually, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of 2^{32} physical page frames. If the frame size is 4 kB, then a system with 4-byte entries can address 2^{44} bytes (or 16 TB) of physical memory.

Now, I know we have 2^{44} bytes of memory because we have 2^{32} page frames and each frame is 4 kB, i.e. 2^{12} bytes, so physical memory is 2^{32}\cdot2^{12} = 2^{44}. Please help me to understand the following:

1. How did we get the number of frames as 2^{32}?
2. If the logical memory space is 2^{32}, then what should the physical memory space be (considering both the fully used and partially used logical address space)?

### StackOverflow

#### ansible: include role in a role?

Is it possible to reuse a role in a role? I do not mean via defining a dependency in the meta/main.yml file of a role, but by including the role in the tasks/main.yml of another role directly?

For example, I define a couple of basic roles in rolebooks and some more high-level roles in roles. I want the high-level roles to include some of the basic roles in addition to some specific tasks.

```
playbooks/
  rolebooks/
    some_role/
  roles/
    webtier/
      tasks/
        main.yml
```

In playbooks/roles/webtier/tasks/main.yml:

```yaml
- shell: echo 'hello'
- { role: rolebooks/some_role }
- shell: echo 'still busy'
```

Thanks

### /r/freebsd

#### Pavucontrol no cards available for configuration???

Hi, pardon me for being a little bit of a n00b. Still adjusting from the Linux side of things... Anyways, I installed pulseaudio and pavucontrol so I could use my USB headset. I also loaded the generic sound driver.
I am using i3 as my desktop, and pavucontrol to control volume. On Arch Linux I simply went to the sound card config tab in pavucontrol, turned off my built-in laptop speakers, and set the USB headset to Generic Audio Duplex. But on FreeBSD, when I go to the sound card config tab (Sound devices), it simply says no cards available for configuration. What exactly am I missing? I've read the handbook, and it says to load the FreeBSD generic sound driver and install pulseaudio and pavucontrol... Any suggestions?

submitted by solenoidpuncher

### Planet FreeBSD

#### Sponsor Spotlight: Silicon Valley FreeBSD Developer and Vendor Summit

The FreeBSD Foundation has been a long-time sponsor of events like the upcoming FreeBSD Developer and Vendor Summit. This year we would also like to thank Microsoft and RootBSD for their extended support of the event. Opportunities to bring the developer and vendor communities together to further the Project would not be possible without the support of companies like these two. Please take a minute to find out more about why these organizations are involved with the FreeBSD Project.

Microsoft's customers have been clear that they want a single hypervisor for their environments, whether they are running Windows, Linux or FreeBSD operating systems. Microsoft is committed to working with the FreeBSD Foundation to ensure that FreeBSD is a first-class guest operating system on Windows Server Hyper-V, and is focused on improving reliability, performance and support of new Hyper-V features in the upcoming updated release of BSD Integration Services. Find out more here.

RootBSD is a provider of hosting services with an emphasis on the BSD family of operating systems. As users of FreeBSD ourselves, we believe it is important to contribute back to the community, and we do so by sponsoring services for individual developers as well as events such as the Developer's Summit.
We are thrilled to be able to support the Silicon Valley Developer's Summit, as we've seen first-hand the results that face-to-face meetings can have in sparking new ideas and discussions that might not happen through strictly online communication. Find out more about RootBSD here.

### StackOverflow

#### Spark RDD any() and all() methods?

I have an RDD[T] and a predicate T => Boolean. How do I calculate whether all of the items fit (or do not fit) the predicate? Of course I can do it like this:

```scala
rdd
  .map(predicate)
  .reduce(_ && _)
```

but this iterates over the full collection, which is overkill. I tried another approach which worked well for local[1], but seemed to iterate through everything on a real cluster too:

```scala
rdd
  .map(predicate)
  .first()
```

(This fails with an exception if it can't find any of the required items.) What is the canonical way to achieve this?

#### Importing a text file into Cassandra using Spark when there are multiple variable types

I'm using Spark to import data from text files into CQL tables (on DataStax). I've done this successfully with one file in which all variables were strings. I first created the table using CQL, then in the Spark shell using Scala ran:

```scala
val file = sc.textFile("file:///home/pr.txt").map(line => line.split("\\|").map(_.toString));
file.map(line => (line(0), line(1))).saveToCassandra("ks", "ks_pr", Seq("proc_c", "proc_d"));
```

The rest of the files I want to import contain multiple variable types. I've set up the tables using CQL and specified the appropriate types there, but how do I transform them when importing the text file in Spark?

#### json4s object extraction with extra data

I'm using spray with json4s, and I've got the implementation below to handle PUT requests for updating objects... My problem with it is that I first extract an instance of SomeObject from the JSON, but being a RESTful API, I want the ID to be specified in the URL. So then I must somehow create another instance of SomeObject that is indexed with the ID...
To do this, I'm using a constructor like SomeObject(id: Long, obj: SomeObject). It works well enough, but the implementation is ugly and it feels inefficient. What can I do so I can somehow stick the ID in there so that I'm only creating one instance of SomeObject?

```scala
class ApplicationRouter extends BaseRouter {
  val routes =
    pathPrefix("some-path") {
      path("destination-resource" \ IntNumber) { id =>
        entity(as[JObject]) { rawData =>
          val extractedObject = rawData.camelizeKeys.extract[SomeObject]
          val extractedObjectWithId = SomeObject(id, extractedObject)
          handleRequest(extractedObjectWithId)
        }
      }
    }
}

case class SomeObject(id: Long, data: String, someValue: Double, someDate: DateTime) {
  def this(data: String, someValue: Double, someDate: DateTime) = this(0, data, someValue, someDate)
  def this(id: Long, obj: SomeObject) = this(id, obj.data, obj.someValue, obj.someDate)
}
```

### Planet Theory

#### Any polynomial which is hard to count but easy to decide?

Every monotone arithmetic circuit, i.e. a \{+,\times\}-circuit, computes some multivariate polynomial F(x_1,\ldots,x_n) with nonnegative integer coefficients. Given a polynomial f(x_1,\ldots,x_n), the circuit

• computes f if F(a)=f(a) holds for all a\in \mathbb{N}^n;
• counts f if F(a)=f(a) holds for all a\in\{0,1\}^n;
• decides f if F(a)>0 exactly when f(a)>0 holds, for all a\in\{0,1\}^n.

I know explicit polynomials f (even multilinear) showing that the circuit-size gap "computes/counts" can be exponential. My question concerns the gap "counts/decides".

Question 1: Does anybody know of any polynomial f which is exponentially harder to count than to decide by \{+,\times\}-circuits?

As a possible candidate, one could take the PATH polynomial, whose variables correspond to edges of the complete graph K_n on \{1,\ldots,n\}, and each of whose monomials corresponds to a simple path from node 1 to node n in K_n.
This polynomial can be decided by a circuit of size O(n^3) implementing, say, the Bellman-Ford dynamic programming algorithm, and it is relatively easy to show that every \{+,\times\}-circuit computing PATH must have size 2^{\Omega(n)}. On the other hand, every circuit counting PATH solves the \#PATH problem, i.e. counts the number of 1-to-n paths in the subgraph of K_n specified by the corresponding 0-1 input. This is a so-called \#P-complete problem. So, we all "believe" that PATH cannot have any counting \{+,\times\}-circuits of polynomial size. The "only" problem is to prove this ...

I can show that every \{+,\times\}-circuit counting a related Hamiltonian path polynomial HP requires exponential size. Monomials of this polynomial correspond to 1-to-n paths in K_n containing all nodes. Unfortunately, Valiant's reduction of \#HP to \#PATH requires computing the inverse of the Vandermonde matrix, and hence cannot be implemented by a \{+,\times\}-circuit.

Question 2: Has anybody seen a monotone reduction of \#HP to \#PATH?

And finally:

Question 3: Was a "monotone version" of the class \#P considered at all?

N.B. Note that I am talking about a very restricted class of circuits: monotone arithmetic circuits! In the class of \{+,-,\times\}-circuits, Question 1 would be just unfair to ask at all: no lower bounds larger than \Omega(n\log n) are known for such circuits, even when they are required to compute a given polynomial on all inputs in \mathbb{R}^n. Also, in the class of such circuits, a "structural analogue" of Question 1 -- are there \#P-complete polynomials which can be decided by poly-size \{+,-,\times\}-circuits? -- has an affirmative answer. Such is, for example, the permanent polynomial PER=\sum_{h\in S_n}\prod_{i=1}^n x_{i,h(i)}.
### QuantOverflow

#### Inflation/Rates Correlation

I've been looking into a short piece of maths a colleague has written on pricing inflation with payment delays, and was hoping someone could confirm whether my understanding is correct, or, if my colleague's calculation is invalid, why.

The setup: suppose, for example, you have an inflation-linked cashflow that captures the level of inflation between 2014 - 2015, but this cashflow isn't paid until a year later, in 2016. Then the value of this cashflow is not necessarily just the expected 2014 \rightarrow 2015 inflation discounted back from 2016, because you may have some correlation between interest rates and inflation. In essence, \mathbb{E}(\text{I}_{2015} \cdot \text{DF}_{2016} ) \ne \mathbb{E}(\text{I}_{2015}) \cdot \mathbb{E}(\text{DF}_{2016} ) if your discount factor and inflation are correlated.

This is not surprising - if inflation and interest rates are +100% correlated, then an increase in inflation is partially "hedged" by rates rising and your discount factors shrinking, and conversely, if they're -100% correlated, then a rise in inflation will be further compounded by lower discount rates. In either case, the expected value of your cashflow is nonlinear in inflation, and you would expect it to be some function of correlation.

My aim therefore is to understand the magnitude of the "convexity adjustment" \Delta, where \mathbb{E}(\text{I}_{2015} \cdot \text{DF}_{2016} ) = \Delta \cdot ( \mathbb{E}(\text{I}_{2015}) \cdot \mathbb{E}(\text{DF}_{2016} )).

The maths my colleague wrote to value this adjustment is below. I am aware it's relatively crude; it doesn't need to be perfect for my purposes - I just want something in the right ballpark and better than making no adjustment at all. There are more sophisticated discussions of the maths, such as this paper, but I'd like to understand the simplified calculation first if possible.
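Before going through the colleague's maths below, it helps to record the standard bivariate-normal identity that the adjustment ultimately comes from (a textbook fact, stated here for reference in the question's notation):

```latex
% For jointly normal X \sim N(\mu_X,\sigma_X^2), Y \sim N(\mu_Y,\sigma_Y^2)
% with correlation \rho:
\mathbb{E}\left(e^{X}e^{-Y}\right)
  = \exp\!\left(\mu_X-\mu_Y+\tfrac{1}{2}\left(\sigma_X^2+\sigma_Y^2-2\rho\,\sigma_X\sigma_Y\right)\right),
\qquad
\mathbb{E}\left(e^{X}\right)\mathbb{E}\left(e^{-Y}\right)
  = \exp\!\left(\mu_X-\mu_Y+\tfrac{1}{2}\left(\sigma_X^2+\sigma_Y^2\right)\right),
% so the ratio of the two expectations is
\frac{\mathbb{E}\left(e^{X}e^{-Y}\right)}
     {\mathbb{E}\left(e^{X}\right)\mathbb{E}\left(e^{-Y}\right)}
  = e^{-\rho\,\sigma_X\sigma_Y}.
```

With X = \log I_T (so \sigma_X = \sigma_I\sqrt{T}) and Y = rD (so \sigma_Y = \sigma_r\sqrt{T}\,D), the ratio becomes \exp(-\rho\,\sigma_I\,\sigma_r\,D\,T), which is exactly the adjustment \Delta in the calculation that follows.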
Let I_T = the inflation fixing at time T, paid on the lagged date T_L = T + D. Write the discount factor from time a to b, as seen at time c, as \delta_c(a,b). Then a payment of I_T at time T_L is equivalent to I_T \cdot \delta_{T}\,(T,T_L) at time T. Letting r be the short-term interest rate observed at T, we can say \delta_{T}\,(T,T_L) = e^{-r D}. Suppose now that I_T is lognormally distributed with variance \sigma_I^2 T, that r is normally distributed with variance \sigma_r^2 T, and that their correlation is \rho. Then we have

\mathbb{E}(I_T \cdot \delta_T(T,T_L)) = \exp{[-\rho \cdot \sigma_I \cdot \sigma_r \cdot D \cdot T]} \cdot \mathbb{E}(I_T) \cdot \delta_0(T,T_L) \,\,\,\, (*)

and the first term is your convexity adjustment \Delta.

My first question is: is all that maths valid? I'm a little suspicious of saying \delta_{T}\,(T,T_L) = e^{-r D} for a "short term rate" r if D is a long delay. It seems like by simply multiplying by D, you're simultaneously treating r as fixed and not fixed. Or is the key point that once you reach time T, r is observed and can be treated as "fixed"?

Assuming the maths is all ok, is (*) just a consequence of the fact that for correlated normal R.V.s X,\,Y we have \mathbb{E}(e^X e^{-Y}) = \mathbb{E}(e^{X-Y}), where X - Y \sim N(\mu_X - \mu_Y, \sigma_X^2 +\sigma_Y^2 - 2 \rho \sigma_X \sigma_Y)? If I understand correctly, you just use that relation and substitute in for X and Y.

Lastly, in this context would \sigma_I and \sigma_r both be normal vols? That's how I read the maths, but the numerical example I have suggests that \sigma_I should be more like a lognormal order of magnitude (e.g. nearer to 10% than 1%).

Many thanks for your help.

### StackOverflow

#### Can not see some local variables in debugger within IntelliJ for some Scala programs

As described in the title, there are some cases where IntelliJ is not able to recognize/display some of the local variables. As can be seen, some of the local variables, e.g.
outarr and arrptr, are already set, but the debugger does not know about them. I am running inside IJ 13.1.4 in a Maven project and have enabled debugging info as follows:

```xml
<configuration>
  <args>
    ..
    <arg>-feature</arg>
    <arg>-g:notc</arg>
  </args>
  ..
```

My question: does anyone recognize this problem, and has anyone come up with workaround(s) for it?

Update: per a suggestion in an answer, here is the result of trying Alt-F8.

### High Scalability

#### Stuff The Internet Says On Scalability For October 24th, 2014

Hey, it's HighScalability time:

This is an ultrasound-powered brain implant! (65nm GP CMOS technology, high speed, low power (100 µW))

• 70: percentage of the world's transactions processed using COBOL.
• Quotable Quotes:
• John Siracusa: Apple has shown that it wants to succeed more than it fears being seen as a follower.
• @Dries: "99% of Warren Buffett's wealth was built after his 50th birthday."
• @Pinboard: It is insane to run a bookmarking site on AWS at any kind of scale. Unless you are competing with me, in which case it’s a great idea - do it!
• @dvellante: I sound like a broken record but AWS has the scale to make infrastructure outsourcing marginal costs track SW curve
• @BrentO: LOL RT @SQLPerfTips: "guess which problem you are more likely to have - needing joins, or scaling beyond facebook?"
• @astorrs: Legacy systems? Yes they're still relevant. ~20x the number of transactions as Google searches @IBM #DOES14
• @SoberBuildEng: "It was all the Agile guys' fault at the beginning. Y'know, if the toilet overflowed, it was 'What, are those Agile guys in there?!'" #DOES14
• @cshl1: #DOES14 @netflix "1.8M revenue / employee" << folks, this is an amazing number
• Isaac Asimov: Probably more inhibiting than anything else is a feeling of responsibility. The great ideas of the ages have come from people who weren’t paid to have great ideas, but were paid to be teachers or patent clerks or petty officials, or were not paid at all. The great ideas came as side issues.
• With Fabric, can Twitter mend the broken threads of developer trust? A good start would be removing third-party client user limit caps. Not sure a kit of many colors will do it.
• Not only do I wish I had said this, I wish I had even almost thought it. tjradcliffe: I distinguish between two types of puzzles: human-made (which I call puzzles) and everything else (which I call problems.) In those terms, I hate puzzles and love problems. Puzzles are contrived by humans and are generally as much psychology problems as anything else. They basically require you to think like the human who created them, and they have bizarre and arbitrary constraints that are totally unlike the real world, where, as Feyerabend told us, "Anything goes."
• David Rosenthal with a great look at Facebook's Warm Storage: 9 [BLOB] types have dropped by 2 orders of magnitude within 8 months...the vast majority of the BLOBs generate I/O rates at least 2 orders of magnitude less than recently generated BLOBs...Within a data center it uses erasure coding...Between data centers it uses XOR coding...When fully deployed, this will save 87PB of storage...heterogeneity as a way of avoiding correlated failures.
• Gene Tene on whether it is a CPU-bound future: I don't think CPU speed is a problem. The CPUs and main RAM channels are still (by far) the highest performing parts of our systems. For example, yes, you can move ~10-20Gbps over various links today (wired or wifi, "disk" (ssd) or network), but a single Xeon chip today can sustain well over 10x that bandwidth in random access to DRAM. A single chip has more than enough CPU bandwidth to stream through that data, too. E.g. a single current Haswell core can move more than that 10-20Gbps in/out of its cache levels, and even relatively low-end chips (e.g. laptops) will have 4 or more of these cores on a single chip these days. BTW, a great thread if you are interested in latency issues.
Don't miss all that the Internet has to say on Scalability: click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

### StackOverflow

#### Determining if an attribute exists in Datomic (Datomisca)

I am trying to find an efficient way to determine if a Datomic attribute is already defined in a database. I am using the Scala wrapper Datomisca. Here is the best way that I have found so far:

```scala
val exists = Datomic.q(Query("""
  [:find ?attr
   :in
   :where [_ :db.install/attribute ?i]
          [?i :db/ident ?part]
  ]"""), Datomic.database).map {
  case DKeyword(keyword) => keyword.toString
}.contains(":some/attribute")
```

but I'm guessing that there is a better way. I'm running Datomic-pro 0.9.4755 with Datomisca 0.6.

### /r/compsci

#### RSS CompSci feeds

Can anybody recommend some CompSci/technology RSS feeds? They are for a fairly general audience who are interested in CompSci but not experts, so nothing too niche.

submitted by lollipoplaura

### StackOverflow

#### Compose Scalaz validations

I would like to use Scalaz for validations and want to be able to reuse the validation functions in different contexts. I'm totally new to Scalaz, btw. Let's say I have these simple checks:

```scala
def checkDefined(xs: Option[String]): Validation[String, String] =
  xs.map(_.success).getOrElse("empty".fail)

def nonEmpty(str: String): Validation[String, String] =
  if (str.nonEmpty) str.success else "empty".fail

def int(str: String): Validation[String, Int] = ...
```

I'd like to be able to compose validations where the output from one is fed into the other. I could easily do that with flatMap or via for comprehensions, but it feels like there must be a better way than that.

```scala
for {
  v1 <- checkDefined(map.get("foo"))
  v2 <- nonEmpty(v1)
  v3 <- int(v2)
  v4 <- ...
} yield SomeCaseClass(v3, v4)
```

or

```scala
val x1 = checkDefined(map get "foo").flatMap(nonEmpty).flatMap(int)
val x2 = check(...)
// How to combine x1 and x2?
```
Any thoughts from the Scalaz experts out there?

### CompsciOverflow

#### Co-NP problem corresponding to SAT

I've been reading lately about the SAT problem, NP, and co-NP. Many sources say that the SAT problem is co-NP, though I can't find a co-NP problem equivalent to SAT. Does anyone have any idea about this? (Or could you clear up my misunderstanding?)

#### Finding approximately collinear points

My issue comes from image processing. The image below is for a better understanding of the problem. I have to detect three dots (shown in black, marked with red arrows). The other points I get from my image after processing it are shown in green. As you can see, I get many more points from the image than just the black ones, due to other structures inside the picture. So I've got a set of points, and I know that at least three of them are on a curve which is approximately a line, and the approximate distance between the points is also known. I need to find these three points from my set. The algorithm should also be able to detect whether or not such a structure is present in the picture.

### StackOverflow

#### Pure functional JavaScript - solving a sample problem

I wrote a decodeMsg function which was given as a problem on codeeval; here's my code: http://bit.ly/1rDIvCh It works, but I'm trying to learn and get more into pure functional JavaScript. I would like to kindly ask anyone to assist me with a cleaner, faster, purely functional way to write this type of function. Thank you all in advance!

### CompsciOverflow

#### Permutations in a k-sorted array

Definition of a k-sorted array: an array in which an element is at most $k$ places away from its sorted position. I have a question in my Algorithms assignment which asks me to prove that the lower bound to sort a k-sorted array is $\Omega(n\log{k})$. I was trying to approach this question by using the standard comparison-sort proof (the $\Omega(\log n!)$ bound), where we have $n!$ total permutations of the sequence $[a_1,a_2,\cdots,a_n]$.
My friend told me that there are $k^{(n-k)}k!$ permutations of a k-sorted array. I'm finding it difficult to prove the same. How should I go about finding the total number of permutations of a k-sorted array? Try $n=5$, $k=2$. The total permutation count is $3\times 3\times 3\times 2\times 1$: the first element has 3 possibilities, $\{1,2,3\}$; the second has 4 possibilities, $\{1,2,3,4\}$, but one position is already taken by the first, so effectively it has $(4-1)$ slots; the third has 5 possibilities, $\{1,2,3,4,5\}$, but the first and second occupy two of these, hence 3 possibilities; the fourth has 4 slots, $\{2,3,4,5\}$, and 3 of them are taken? (doubt: because the 1st position can also be occupied), hence 2; and the last has one.

### QuantOverflow

#### Locked or Crossed Markets

I don't understand why Rule 610 of Reg NMS was introduced: what was the problem with locked markets? I have read that one of the issues was that it forced a market maker (say, from Nasdaq) who needed to buy a stock (and who had the obligation to provide the best possible price for a client) to go and "pick" the locking sell limit order from the locking venue (say, Instinet) and pay the taker's fee. However, this would have been true every time a competing ECN was posting an ask quote lower than the one posted by the market maker, not only when the markets were locked... Am I missing something?

### StackOverflow

#### LightTable Clojure Watches not Updating on Atom

When I add a watch to a variable in Clojure and rebind it, the watch is updated dynamically.

```clojure
(def x "jlkfds")
x
```

In the above example x will always reflect its value. However, when I try to do this using an atom, I have no luck. I have to execute the whole thing again to get the changes to reflect in either the instarepl or the watch.

```clojure
(defonce y (atom 10))
@y             ; watch shows 38
(swap! y inc)  ; watch shows 80
```

In the above example I have executed the swap without executing the deref, and they have thus become out of sync.
What confuses me is that I saw a JavaScript demo where someone (Chris) was able to watch the coordinates of a mouse pointer dynamically change. I really like the idea of having this functionality. Is there a way to do the same thing in Clojure? Like this? http://youtube.com/watch?v=d8-b6QEN-rk Thank you

#### How to prove "~(nat = False)", "~(nat = bool)" and "~(nat = True)" in Coq

The following two propositions are easy to prove.

```coq
Theorem nat_eq_nat : nat = nat.
Proof.
  trivial.
Qed.

Theorem True_neq_False : ~(True = False).
Proof.
  unfold not. intros. symmetry in H. rewrite H. trivial.
Qed.
```

But when I tried to prove a slightly different proposition, ~(nat = False), I found that the rewrite tactic doesn't work. It reports:

```
Error: Refiner was given an argument "fun x : Set => x" of type
"Set -> Set" instead of "Set -> Prop".
```

So I tried to write a lemma.

```coq
Lemma Type_eq_prod : forall (a : Type) (b : Type), a = b -> (a -> b).
Proof.
  intros. rewrite <- H. trivial.
Qed.

Theorem nat_neq_False : ~(nat = False).
Proof.
  unfold not. intros.
  apply (Type_eq_prod nat False) in H.
  inversion H. apply 0.
  (* no subgoals left *)
```

All works fine until now. But when I tried to Qed it, it reported:

```
Error: Illegal application (Type Error):
The term "Type_eq_prod" of type "forall a b : Type, a = b -> a -> b"
cannot be applied to the terms
 "nat" : "Set"
 "False" : "Prop"
 "H" : "nat = False"
 "0" : "nat"
The 3rd term has type "nat = False" which should be coercible to "nat = False".
```

The following are another two propositions that leave me stuck.

```coq
Theorem nat_neq_bool : ~(nat = bool).
Proof.
  unfold not. intros.
Abort.

Theorem nat_neq_true : ~(nat = True).
Proof.
  unfold not. intros.
Abort.
```

My questions are:

1. Why does the rewrite tactic not work with the proposition ~(nat = False)?
2. Why can't I Qed it when there are no subgoals left?
3. How can I prove the aborted propositions above (or their negations) in Coq?
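One way to see why these inequalities ought to be provable is a cardinality argument. The following is only an outline, not the thread's answer; formalizing it in Coq still requires transporting along the equality at the right universe (e.g. via `eq_rect` with motive `fun T : Type => T`):

```latex
\textbf{Sketch.} If $H : \mathsf{nat} = \mathsf{False}$, transport the
inhabitant $0 : \mathsf{nat}$ along $H$ to obtain an inhabitant of
$\mathsf{False}$, which is absurd. For $\mathsf{nat} \neq \mathsf{True}$:
any $x, y : \mathsf{True}$ satisfy $x = y$ (by case analysis), so
transporting $0$ and $1$ along $H$ and back would force $0 = 1$, a
contradiction. For $\mathsf{nat} \neq \mathsf{bool}$: $\mathsf{bool}$ has
exactly two inhabitants, so among the transports of $0, 1, 2$ two must
coincide, and transporting back contradicts the pairwise distinctness of
$0$, $1$, $2$.
```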
### /r/compsci

#### CS/AI in gaming: DeepMind/Google, Oculus Rift, Minecraft/Microsoft, quick survey with many links

### StackOverflow

#### Result type of an implicit conversion must be more specific than AnyRef

Let

```scala
def h(a: AnyRef*) = a.mkString(",")
// h: (a: AnyRef*)String
```

and so

```scala
h("1","2")
// res: String = 1,2
```

However,

```
scala> h(1,2)
error: the result type of an implicit conversion must be more specific than AnyRef
       h(1,2)
         ^
error: the result type of an implicit conversion must be more specific than AnyRef
       h(1,2)
           ^
```

This happens at least in Scala 2.11.1. I'm asking for a workaround.

#### OCaml: Return first n elements of a list

I am new to OCaml and functional programming as a whole. I am working on a part of an assignment where I must simply return the first n elements of a list. I am not allowed to use List.length. I feel that what I have written is probably overly complicated for what I'm trying to accomplish. What my code attempts to do is concatenate the front of the list to the end until n is decremented to 1, at which point the head moves a further n-1 spots toward the tail of the list, and then the tail is returned. Again, I realize that there is probably a much simpler way to do this, but I am stumped and probably showing my inability to grasp functional programming.

```ocaml
let rec take n l =
  let stopNum = 0 - (n - 1) in
  let rec subList n lst =
    match lst with
    | hd :: tl ->
        if n = stopNum then tl
        else if (0 - n) = 0 then subList (n - 1) tl
        else subList (n - 1) (tl @ [hd])
    | [] -> []
;;
```

My compiler tells me that I have a syntax error on the last line. I get the same result regardless of whether `| [] -> []` is the last line or the one above it. The syntax error does not exist when I take out the nested subList let. Clearly there is something about nested lets that I am just not understanding. Thanks.

### Lobsters

#### Obese crash test dummies the key to preventing road deaths?
### Planet Emacsen

#### Irreal: let with Lexical and Dynamic Scope

Artur Malabarba points to this excellent Stack Exchange entry on the speed of let with lexical versus dynamic scope. Malabarba asks why let is faster with lexical scope than it is with dynamic scope. lunaryorn provides an excellent and detailed answer that shows the generated byte code for both cases. The TL;DR is that using dynamic scope means that the let variables have to be looked up in the global scope, set, and then reset after use, while using lexical scope just makes the variables local and avoids all the lookup and setting/resetting. That may sound a little opaque, but lunaryorn's answer explains things in a very understandable way. Generally, I don't worry too much about speed in the Elisp I write because it's mostly just simple functions that run quickly no matter how ham-handed my coding is. If you write functions that have "long" running times, it's worthwhile to take the lessons in lunaryorn's answer into account. It is, in any event, interesting and worth knowing for the day you need it.

### StackOverflow

#### Is it possible to change a flow of ZMQ messages with routing to a physical address?

I might be misunderstanding this, but it seems like when using a ROUTER/DEALER pattern in ZMQ, when the request is received by the responder it will have to travel back up the way it came. It seems like it would incur less network latency to have the replier respond directly to the remote requester. Is there any way to do this, maybe by passing the physical address instead of the id? Is it possible to create this sort of system? I would like to see a transmission chain something like this: REQ->ROUTER->DEALER->REP->REQ, where both occurrences of REQ represent the same machine.

#### Sending a message to a specific routee in a pool

Say I have a pool of routee actors, and each of these actors has its own 'subordinate' actor it needs to communicate with.
When the subordinate sends a message back to the 'parent' routee, it seems that this message is also being passed through the router, so there is no guarantee that it will make it to the appropriate routee. So take the code below:

```scala
class MyActor extends Actor {
  val router = context.actorOf(FromConfig.props(Props(new Routee)), "myrouter")

  def receive = {
    case msg: SomeMsg => router ! msg
  }
}

class Routee extends Actor {
  val sub = context.actorOf(Props(new Subordinate(this)))
  var waiting = false

  def receive = {
    case msg: SomeMsg =>
      if (!waiting) {
        waiting = true
        sub ! msg
      }
    case ack: SomeAck =>
      waiting = false
      if (ack.routee != this) println("From a different subordinate")
  }
}

class Subordinate(routee: Routee) extends Actor {
  def receive = {
    case msg: SomeMsg => sender ! SomeAck(routee)
  }
}
```

So if I run the code below, this will cause the "From a different subordinate" message to be printed:

```scala
val actor = ActorSystem("test").actorOf(Props(new MyActor), "myactor")
while (true) actor ! SomeMsg()
```

There is no guarantee that the acknowledgement is sent back to the appropriate routee. Is it the case that the acknowledgement is being passed through the router, and if so, is there a way around this?

### CompsciOverflow

#### What is the Big O of T(n)?

I have a homework problem in which I should find the formula for, and the order of, $T(n)$ given by
$$T(1) = 1 \qquad\qquad T(n) = \frac{T(n-1)}{T(n-1) + 1}\,.$$
I've established that $T(n) = \frac{1}{n}$, but now I am a little confused. Is $T(n) \in O(\frac{1}{n})$ the correct answer for the second part? Based on the definition of big-O we have
$$O(g(n)) = \{f(n) \mid \exists c, n_0>0\text{ s.t. } 0\leq f(n) \leq cg(n)\text{ for all } n\geq n_0\}\,.$$
This holds for $f(n) = g(n) = \frac{1}{n}$ so, based on the definition, $O(\frac{1}{n})$ should be correct, but in the real world it's impossible for an algorithm to be faster than $O(1)$.

#### Need help deriving the complexity!

Please help me with this one!
What will be the communication and computation complexity of this code?

```
While (m > 0) do {
    While ((m mod 2) == 0) do {
        m <- floor(m/2)   // parties communicate m times
    }
    m = m - 1;
}
```

This code has been modified from the book "Fundamentals of Computer Algorithms" by Sahni. According to that, it seems the computational complexity is O(log m), as m decreases by a factor of at least 2. But can I say the communication complexity is O(log m) as well? Probably not; I'm really confused!

### /r/emacs

#### Efficiently using a large monitor

I have a nice big monitor — 2560x1600 — and verily, it is good. There is space for 3 windows of code, 80 lines tall, across a full-screen frame with a comfortable font size. However, making best use of many windows is a challenge. I'm curious what other people do? I recently made a function (below) which I'm finding very useful to return to a good working setup after it's disrupted. It re-configures the split and fills the windows with the most recent buffers from (projectile-project-buffers), falling back to (buffer-list) when not in a project. (Side note: projectile is great; I don't know why I ignored it for so long.) I can imagine, though, some more sophisticated solution. I'd like to work in the center window (of 3) with the others dynamically updating to the stack of most recent buffers, so when I switch buffers they shuffle down. To make that most useful, though, I'd need the ability to fix particular window-buffer combos to prevent the shuffle-down. It perhaps could be quite easy if I modified my function to make it less destructive, and to have it respect something like window-dedicated-p.

```elisp
(defun working-split (window-count)
  "Make vertical splits for working window setup.
If optional argument WINDOW-COUNT is omitted or nil, default to
max splits of at least 90 chars wide."
  (interactive "P")
  (let ((window-count (if window-count
                          window-count
                        (/ (frame-width) 90)))
        (show-buffers (if (projectile-project-p)
                          (projectile-project-buffers)
                        (remove-if 'minibufferp (buffer-list)))))
    (delete-other-windows)
    ;; split window appropriate count - make 2nd window current
    (dotimes (i (- window-count 1))
      (split-window-horizontally)
      (if (= i 0) (other-window 1)))
    (balance-windows)
    ;; set window buffer from show-buffers list
    (mapcar* 'set-window-buffer (window-list) show-buffers)))
```

To go along with my function I have key chords defined to jump one window left or right, because I'm not going to lift my hands to the arrow keys...

```elisp
(key-chord-define-global "qj" 'windmove-left)
(key-chord-define-global "qk" 'windmove-right)
(key-chord-define-global "ql" (lambda ()
                                (interactive)
                                (fixfont nil)
                                (working-split nil)))
(key-chord-define-global "qf" 'bury-buffer)
```

'ql' is kind of analogous to C-l, so I like that chord choice. My fixfont function adjusts the font depending on screen size, for when I undock the laptop.

submitted by EatMoreCrisps [link] [14 comments]

### StackOverflow

#### Semantics of functional prototype-based programming in JavaScript

Let's say we inherit 90% of code functionality from a boilerplate prototype-based function:

```javascript
var BoilerplateForOtherFunctions = function(){};
BoilerplateForOtherFunctions.prototype.addOneYear = function(){
  return this.myAge += 1;
};
BoilerplateForOtherFunctions.prototype.getAge = function(myUrl){
  return this.myAge;
};
```

And we have many functions that inherit this prototype and use it like this:

```javascript
var MyNormalFnOne = function(){
  this.myAge = "34";
  console.log( this.getAge() );
  this.addOneYear();
};
// only injects the boilerplate object into our function
underscore.extend(MyNormalFnOne.prototype, BoilerplateForOtherFunctions.prototype);

new MyNormalFnOne();
```

Everything works well.
The problem is that we rely on the fact that someone has defined this.myAge before we use anything from the boilerplate prototype. This is a very common pattern in JavaScript programming. Wouldn't it be better to pass all arguments into the prototype functions and be more functional? It would lead to faster debugging, allow function caching, and create fewer coding mistakes and program errors. Something like this (a more functional approach):

```javascript
var BoilerplateForOtherFunctions = function(){};
BoilerplateForOtherFunctions.prototype.addOneYear = function(age){
  return age += 1;
};
BoilerplateForOtherFunctions.prototype.getAge = function(age){
  return age; // makes no sense here - just a sample, of course
};

var MyNormalFnOne = function(){
  var myAge = "34";
  console.log( this.getAge(myAge) );
  myAge = this.addOneYear(myAge);
};
// only injects the boilerplate object into our function
underscore.extend(MyNormalFnOne.prototype, BoilerplateForOtherFunctions.prototype);

new MyNormalFnOne();
```

The problem with the second approach (the functional one): many arguments must be passed to every function, and the return value must be assigned again; it doesn't "magically" work. The problem with the first approach (the traditional one): difficult debugging, unreliable code, no function caching, ... Any opinion?

### Lobsters

#### The Case of the Modified Binaries

### StackOverflow

#### Clojure primitive array type metadata

I've made a simple performance test: create a 900000-element array and read all of its elements.
```clojure
(time (let [array (byte-array 900000)]
        (loop [i (- 900000 1)]
          (when (< 0 i)
            (aget array i)
            (recur (- i 1))))))
;; "Elapsed time: 10.244612 msecs"
```

Then I wanted to determine the type of array to create dynamically, so here is a trivial "tautology" hashmap defined for simplicity:

```clojure
(def types {:byte-array byte-array
            :int-array  int-array})
```

Now I am running the test again and getting a great performance gap:

```clojure
(time (let [array ((types :byte-array) 900000)]
        (loop [i (- 900000 1)]
          (when (< 0 i)
            (aget array i)
            (recur (- i 1))))))
;; "Elapsed time: 7190.233155 msecs"
```

And the workaround:

```clojure
(time (let [^bytes array ((types :byte-array) 900000)]
        (loop [i (- 900000 1)]
          (when (< 0 i)
            (aget array i)
            (recur (- i 1))))))
;; "Elapsed time: 12.48304 msecs"
```

The problem is how to type hint Clojure dynamically. Does anyone know what happens under the hood?

### /r/scala

#### What's the difference between def addOne and val addOne?

I'm reading Scala School and am wondering how these two statements differ:

```scala
val addOne = (x: Int) => x + 1
def addOne(m: Int): Int = m + 1
```

submitted by metaperl [link] [7 comments]

### StackOverflow

#### Lists and map orderings

In Scala, if we have a List[(Char, Int)] ordered by the first element of each pair, does converting to a map via toMap preserve the ordering? If so, does converting the resulting map back to a list via toList preserve ordering?

### Lobsters

#### AtScript: Google proposes runtime type checks for JavaScript

### StackOverflow

#### Missing IOManager in Akka 2.3.6

Hi, I searched the documentation and the migration page, but I was not able to find the replacement for, or an explanation of, what actually happened to IOManager since Akka 2.2.4. Any help appreciated. Thanks!

#### F# version of Scala code using zipWithIndex

This Scala code (somewhat simplified by me, so I might have made a mistake) converts a series of newline-separated input lines into a list of coordinates of non-space characters within those lines.
The first coordinate is the index of the character within the line; the second is the coordinate of the line within the string. F# doesn't seem to have an equivalent of zipWithIndex (though it does have a map overload that provides an index), and the syntax of sequence expressions is quite different to that of Scala's generator expressions, so I am struggling to write the F# equivalent.

The Scala code:

```scala
val input = "X \n X"

def charCoords(input: String) =
  for {
    (xs, y) <- input.split('\n').map(_.zipWithIndex).zipWithIndex.iterator
    (c, x)  <- xs.iterator
    if c != ' '
  } yield Coord(x, y)
```

My broken F# attempt:

```fsharp
let splitLines (s:string) = List.ofSeq(s.Split([|'\n'|]))

let input = "X \n X"

let charCoords input =
    let lines = splitLines input
    seq {
        for (y, line) in List.map (fun y line -> (y, line)) lines do
            yield! for (x, character) in List.map (fun x character -> (x, character)) do
                if char != ' ' then yield (x, y)
    }
```

#### Scala multiple type conformance

I have the following code and would like to call a method on a class that implements the trait EventTrace[T], where at the same time T should implement the trait Event (e.g. as in doSomethingOnTopOfAnEventTrace()):

```scala
trait Event
class ConcreteEvent[T <: Event]

trait EventTrace[T <: Event]
class ConcreteEventTrace[T <: Event] extends EventTrace[T]

val concreteEventTrace: ConcreteEventTrace[ConcreteEvent] =
  new ConcreteEventTrace(new ConcreteEvent)

def doSomethingOnTopOfAnEventTrace[T <: Event, Z <: EventTrace[T]](eventTrace: Z) {
  println("Action on top of a Any kind of EventTrace of any type of Event")
}
```

However, calling doSomethingOnTopOfAnEventTrace(concreteEventTrace) gives me the following error:

```
Error:(129, 3) inferred type arguments [Nothing,ConcreteEventTrace[ConcreteEvent]]
do not conform to method doSomethingOnTopOfAnEventTrace type parameter bounds
[T <: Event,Z <: EventTrace[T]]
  doSomethingOnTopOfAnEventTrace(concreteEventTrace)
  ^
Error:(129, 38) type mismatch;
 found   : ConcreteEventTrace[ConcreteEvent]
 required: Z
  doSomethingOnTopOfAnEventTrace(concreteEventTrace)
  ^
```

#### What syntax do the core.logic matche and defne pattern-matching constructs use?

Some of core.logic's constructs (matcha, matche, matchu, defne, fne) take pattern-matching expressions as their body and can be used like this:

```clojure
(run* [q]
  (fresh [a o]
    (== a [1 2 3 4 5])
    (matche [a]
      ([ [1 2 . [3 4 5] ] ] (== q "first"))
      ([ [1 2 3 . [4 5] ] ] (== q "second"))
      ([ [1 . _] ]          (== q "third")))))
;=> ("first" "second" "third")
```

(example from the Logic-Starter wiki) But I can't find a specification of the pattern-matching syntax in the core.logic documentation. What is this syntax? Maybe I can find it in some miniKanren docs or books?

- What is the difference between matched variables prefixed with ? and without it?
- Are there any other destructuring constructs in addition to lists with . (similar to & in Clojure)?
- Will [_ _] match only sequences with two elements?
- Is it possible to destructure maps?

#### How to change an attribute on a Scala XML Element

I have an XML file in which I would like to map some attributes with a script. For example:

```xml
<a>
  <b attr1="100" attr2="50"/>
</a>
```

might have attributes scaled by a factor of two:

```xml
<a>
  <b attr1="200" attr2="100"/>
</a>
```

This page has a suggestion for adding attributes but doesn't detail a way to map a current attribute with a function (this way would make that very hard): http://www.scalaclass.com/book/export/html/1

What I've come up with is to manually create the XML (non-Scala) linked list...
something like:

```scala
// a typical match case for running thru XML elements:
case Elem(prefix, e, attributes, scope, children @ _*) => {
  var newAttribs = attributes
  for (attr <- newAttribs) attr.key match {
    case "attr1" =>
      newAttribs = attribs.append(new UnprefixedAttribute("attr1",
        (attr.value.head.text.toFloat * 2.0f).toString, attr.next))
    case "attr2" =>
      newAttribs = attribs.append(new UnprefixedAttribute("attr2",
        (attr.value.head.text.toFloat * 2.0f).toString, attr.next))
    case _ =>
  }
  // set new attribs and process the child elements
  Elem(prefix, e, newAttribs, scope, updateSubNode(children) : _*)
}
```

It's hideous, wordy, and needlessly re-orders the attributes in the output, which is bad for my current project due to some bad client code. Is there a Scala-esque way to do this?

### Fefe

#### Look how the Daily Mail is selling the British back payments ...

Look how the Daily Mail is selling the back payments the British owe. Here is the article. The background is that Britain has had special conditions since the founding of the EU and pays in far less than the other countries. That was actually supposed to end years ago, but the EU kept granting them extensions.

### Portland Pattern Repository

#### Elias Sinderson

(by dslb-178-003-158-169.178.003.pools.vodafone-ip.de 34 hours ago)

### DataTau

#### London Data Store (590 London datasets)

### StackOverflow

#### Ansible multiple hosts with port forwarding

I have a hosts inventory with multiple hosts, each with port forwarding. The hosts file is:

```ini
[all]
10.80.238.11:20003
10.80.238.11:20001
10.80.238.11:20007
10.80.238.11:20009
```

I am trying to ping them with a playbook, but I always get a response from only the first entry, in this case 10.80.238.11:20003, and not from the others. Authentication is in place; whatever host I move to first place, I get a response from it but not from the others. My playbook is:

```yaml
---
- hosts: all
  remote_user: root
  gather_facts: no
  tasks:
    - name: test connection
      ping:
```

Any idea how to fix this?
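A likely cause, though not confirmed in the post, is that Ansible keys inventory entries by hostname, so four entries with the same IP collapse into a single host and only one port survives. A hedged sketch of a fix: give each entry a unique alias (the names vm1..vm4 are made up) and set the connection variables explicitly (ansible_ssh_host/ansible_ssh_port in 2014-era Ansible; newer releases call them ansible_host/ansible_port):

```ini
[all]
vm1 ansible_ssh_host=10.80.238.11 ansible_ssh_port=20003
vm2 ansible_ssh_host=10.80.238.11 ansible_ssh_port=20001
vm3 ansible_ssh_host=10.80.238.11 ansible_ssh_port=20007
vm4 ansible_ssh_host=10.80.238.11 ansible_ssh_port=20009
```

With distinct aliases, `hosts: all` should address all four forwarded ports instead of one.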
### Fred Wilson

#### Feature Friday: Etsy In Real Life

This week our portfolio company Etsy introduced Etsy Reader, a dongle for your phone or tablet that allows Etsy sellers to sell on Etsy in real life. The natural reaction to this would be "Etsy knocked off Square", and to some degree that would be correct. But Etsy Reader is not just a card reader. There is quite a bit of software behind the scenes that connects the checkout experience to the seller's shop on Etsy and all of the seller tools that Etsy provides. The better way to think about this is that Etsy Reader extends a seller's Etsy store to the real world of craft fairs, flea markets, and other in-person experiences. Etsy Reader is about coming full circle at Etsy. In the early days, back in 2005 and 2006 when we first invested, Etsy was built seller by seller, at craft fairs, with street teams manning Etsy booths and evangelizing a new way to find customers and meet other like-minded people. Etsy sellers still sell a lot at craft fairs and other face-to-face environments. Now with Etsy Reader, the shop can be online or offline with everything tied together. I am excited to see Etsy Reader come to market. It's been a dream for a while now, and props to Camilla and her team for getting it out the door. Well done.

### StackOverflow

#### Concurrent Akka Agents in Scala

I'm working on a Scala project right now, and I've decided to use Akka's agent library over the actor model, because it allows a more functional approach to concurrency. However, I'm having a problem running many different agents at a time. It seems like I'm capping out at only three or four agents running at once.
```scala
import akka.actor._
import akka.agent._
import scala.concurrent.ExecutionContext.Implicits.global

object AgentTester extends App {
  // Create the system for the actors that power the agents
  implicit val system = ActorSystem("ActorSystem")

  // Create an agent for each int between 1 and 10
  val agents = Vector.tabulate[Agent[Int]](10)(x => Agent[Int](1 + x))

  // Define a function for each agent to execute
  def printRecur(a: Agent[Int])(x: Int): Int = {
    // Print out the stored number and sleep.
    println(x)
    Thread.sleep(250)

    // Recur the agent
    a sendOff printRecur(a) _

    // Keep the agent's value the same
    x
  }

  // Start each agent
  for (a <- agents) {
    Thread.sleep(10)
    a sendOff printRecur(a) _
  }
}
```

The above code creates an agent holding each integer between 1 and 10. The loop at the bottom sends the printRecur function to every agent. The output of the program should show the numbers 1 through 10 being printed out every quarter of a second (although not in any particular order). However, for some reason my output only shows the numbers 1 through 4 being printed. Is there a more canonical way to use agents in Akka that will work? I come from a Clojure background and have used this pattern successfully there before, so I naively used the same pattern in Scala.

#### Composition and partial composition of functions in Clojure

I have a nested vector of vectors, like [[1 2 3] [4 5 6] [7 8 9]], and I want to increment the values of each sub-vector, then find the max of each sub-vector. The formulation I'm using is:

```clojure
(map (comp (partial apply max) (partial map inc))
     [[1 2 3] [4 5 6] [7 8 9]])
```

Is there a better way?

#### How to architect a prediction system using Scala, Spring MVC, Akka, MySQL, Maven & OpenMQ?

Intro: We currently have a system in Java, Spring, JSP, Ant & MySQL which does prediction on the financial records of our logged-in users and sends emails to our customers if there is something spooky happening or when something looks out of the ordinary.
Currently the system predicts very well and it works great.

Problem: The problem is that the previous developers wrote a LOT of code which is not scalable and not reusable. We've got code duplication, Hibernate entities are not fetching objects but IDs, the SQL database does not have any foreign key constraints (orphan data), and too much code has been written due to lack of knowledge of the frameworks used.

Refactoring: We plan on refactoring the code side by side using Scala. Can someone please tell me how to architect a system which does the following?

1. Requirement 1 - Collects financial data, social data and other data from third-party providers for running the algorithms to do the prediction and analysis.
2. Requirement 2 - Runs analysis on the data collected and stores the predictions in our core MySQL database.
3. Requirement 3 - Sends emails to users with daily analysis results every morning at 9 AM, or straight after the analysis gets run if there are any abnormal account activities.
4. Requirement 4 - Allows clients to view an interactive website where they can input custom variables to view their financial performance and compare financial data.
5. Requirement 5 - Admin website to manage the system, re-run analysis manually or re-send email messages, plus system status reporting.

Solution: After days of research and applying my previous experience, I came up with the following solution using Scala, Spring MVC, Akka, MySQL, Maven & OpenMQ. There will be two MySQL instances:

1. data-collector MySQL - used to store only the raw data collected from third-party providers
2. core MySQL - used to store modelling results, email logs, system logs, user logins, etc.

Requirement 1: Maven module data-collector. Will use the data-collector MySQL instance only. This is responsible for collecting data from third-party providers and should be implemented using Spring Batch or Akka.
I've read that Spring Batch does not support Scala and that there is a lack of documentation, so I chose Akka. data-collector will be a standalone application deployed onto an independent server, whose job is to load the incoming data into a MySQL instance separate from the core MySQL instance. The data-collector MySQL instance will store the data, and our core system will invoke data-collector via an API to get financial data, social data, etc. This module will also send a new-data-arrival message to an OpenMQ queue to which the data-analysis module is subscribed, which will then re-run the prediction using the new data.

Requirement 2: Maven module data-analysis. Will use the core MySQL instance only. It uses a Scala numerical library like Breeze, which will be used to port the Matlab analysis and prediction code to Scala. The prediction and analysis results will be stored in the core MySQL instance. This module will invoke the data-collector module to get access to the raw data, and will also send messages to the OpenMQ instance to trigger the emails mentioned in requirement 3.

Requirement 3: Maven module messaging. Will use the core MySQL instance only. This module will listen to an OpenMQ queue/broker which will be used to send various email messages to the client.

Requirement 4: Maven module website. Will use the core MySQL instance only. This module will be a front-end website which the client can log in to in order to view their data in graphs and see reports produced by the analytical module.

Requirement 5: Maven module admin. Will use the core MySQL instance only. An admin website which the admin can log in to in order to manage the system. They can manually re-run the analysis for a specific client for a specific date, or resend an email, and also view the system status. To run the analysis, this module will NOT call the data-analysis module directly but will send a message to the OpenMQ message broker that the data-analysis module is listening to.

Please help me and tell me whether the above design is correct and simple?
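The broker-mediated decoupling in requirement 5 (the admin module publishes a command instead of calling data-analysis directly) can be illustrated with a minimal, language-agnostic sketch. This is Python with an in-process queue standing in for OpenMQ, and all names are hypothetical:

```python
import queue

# In-memory stand-in for the OpenMQ broker the modules would share.
analysis_requests = queue.Queue()

def admin_rerun_analysis(client_id, date):
    """Admin module: publish a re-run command to the broker."""
    analysis_requests.put({"client": client_id, "date": date})

def analysis_worker():
    """data-analysis module: drain queued commands and process each one."""
    results = []
    while not analysis_requests.empty():
        cmd = analysis_requests.get()
        # ... here the real module would re-run the prediction ...
        results.append((cmd["client"], cmd["date"], "analysed"))
    return results

admin_rerun_analysis("client-42", "2014-10-26")
print(analysis_worker())  # the queued command comes back out of the worker
```

The point of the indirection is that the admin module never holds a reference to the analysis module; either side can be redeployed independently as long as the queue contract stays stable.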
### TheoryOverflow

#### What is this variant of the set cover problem known as?

Input is a universe $U$ and a family of subsets of $U$, say ${\cal F} \subseteq 2^U$. We assume that the subsets in ${\cal F}$ can cover $U$, i.e., $\bigcup_{E\in {\cal F}}E=U$. An incremental covering sequence is a sequence of subsets in ${\cal F}$, say ${\cal A}=\{E_1,E_2,\ldots,E_{|{\cal A}|}\}$, that satisfies

1) $\forall E\in {\cal A}, E\in {\cal F}$;
2) every newcomer has a new contribution, i.e., $\forall i>1, \bigcup_{j=1}^{i-1}E_j \subsetneq \bigcup_{j=1}^{i}E_j$.

The problem is to find an incremental covering sequence of maximum length (i.e., with maximum $|{\cal A}|$). Note that a maximum-length sequence must eventually cover $U$, i.e., $\bigcup_{E\in {\cal A}}E=U$. I have attempted to find an algorithm or an approximation algorithm for the longest incremental covering sequence. I was just wondering what this variant of the set cover problem is known as. Thank you!

### StackOverflow

#### How to set a value to a global variable inside a function in JavaScript/Angular?
Well, I need to use a variable in different Angular controllers or any function, which I can do, but in this particular case I'm getting the value of one of those variables from a service (or a function, however you want to call it). This is the code:

    //global variable userdata
    userdata = {};
    ng.tcts.service('user', function(http, q, window, boots){
        return {
            login: function(data){
                var deferred = q.defer();
                http({
                    method: 'POST',
                    url: ng.api + '/log/in',
                    data: { data: data },
                    headers: {'Content-type': 'application/x-www-form-urlencoded'}
                }).success(function(response, status, headers, config){
                    if(response.success){
                        console.log(response);
                        //userdata assignment
                        userdata = response.data;
                        deferred.resolve([response.respnum, response.success, response.report, response.data]);
                    } else {
                        deferred.resolve([response.respnum, response.success, response.report, response.data]);
                    }
                }).error(function(response, status, headers, config){
                    deferred.reject([response.respnum, response.success, response.report]);
                });
                return deferred.promise;
            }
        };
    });

As you can see, I'm assigning userdata the value of response.data inside the success function. When I use it within that function it works; the assignment works fine. But when I try to use it outside of it, the object is empty. I hope you can help me with this, thanks.

### Fefe

#### A Russian Tor exit node was caught patching the ... over ...

#### A bunch of hardware hackers recently had the serial chip on their ...

A bunch of hardware hackers recently had the serial chip on their hardware bricked via Windows Update. The background is that the chip's manufacturer, a Scottish company named FTDI, uploaded drivers to Windows Update that detect counterfeit parts and brick them. They even wrote this into their EULA, which of course nobody read. Update: primary source. Update: the chips can be revived, so not bricked in the literal sense of the word.

#### Running Ubuntu? Want to use TLS 1.2? Doesn't ...

Running Ubuntu?
Want to use TLS 1.2? It doesn't work? This might be why: "Unfortunately, because of the large number of sites which incorrectly handled TLS v1.2 negotiation, we had to disable TLS v1.2 on the client." Will these Debilianists and their offshoots never learn to keep their fingers off OpenSSL patches!

### DataTau

#### Mobile Platform Statistics - Android vs. iOS

### StackOverflow

#### Is there an alternative download location for Scala? [closed]

I've been trying for a while to grab the .MSI or .ZIP off http://www.scala-lang.org/downloads/distrib/files/*, but it's extremely slow and actually stops downloading after a while. I've searched for mirrors, but even places like Softpedia merely link back to scala-lang.org. I was able to grab scala-library.jar from Akka.io, but I don't think that's sufficient for using the language (or maybe I'm not invoking it correctly with java -jar?). Ideas?

### /r/compsci

#### Problem Solving with Algorithms vs Machine Learning

Hello everyone! I am a university student and I have the chance to take either the course "Problem Solving with Algorithms" or Machine Learning. The problem solving course will basically focus on ACM-competition-type questions and will give us a way to refine our skills. Machine Learning, on the other hand, I feel is an extremely interesting topic with huge potential going forward. I wanted to hear your opinion on these two topics for an undergrad student. Thank you in advance! cheers submitted by LupusPrudens [link] [11 comments]

### Planet Clojure

#### Pre-Conj Interview: Anna Pawlicka

Anna Pawlicka interview about Om. <λ>

#### Pre-Conj Interview: Julian Gamble

Julian Gamble interview about core.async and ClojureScript. <λ>

### CompsciOverflow

#### Relative Importance in Graph Theory

I am working on an algorithm that ranks a set of nodes in a graph with respect to how related each node is to other predefined nodes (I call them query nodes).
The way the algorithm works is similar to recommendation algorithms. For instance, if I want to buy an item from an online store, the algorithm will look at my preferences (and/or history of purchased items) and recommend new items to me. Applying this to graph theory, the set of nodes are items and my preferred items are the query nodes. The problem I am facing right now is how to benchmark my results (i.e., I want to run recall and precision on my results), but I don't have ground truth data. My question is: does anyone know a benchmark for this problem? If not, how do you think I can evaluate my results? Note: My algorithm has nothing to do with recommendation algorithms (i.e., the application is different), but I gave this example to convey the general idea of RELATIVE IMPORTANCE algorithms. I am looking for any dataset with a benchmark that may help me in this context. Edit: Based on some requests, I will explain my algorithm in more detail. The algorithm takes as input: a graph (can be directed or undirected, weighted or unweighted), and a set of query nodes (included in the graph). The algorithm will try to rank the nodes in the graph according to their importance with respect to the query nodes. The importance of a node increases as the relationship between it and the query nodes increases. Depending on the application, this relationship is quantified by a value (the weight of an edge) that reflects the level of association between two nodes. For instance, in the DBLP co-authorship dataset, the relation between two nodes (authors) is the number of common papers between them. Therefore, in this case, the algorithm will rank the authors in the DBLP graph according to how close they are to all query nodes (the predefined authors). I hope that this is clear. Thank you

### StackOverflow

#### How to customize serialization behavior without annotations in Salat?

I'm using the Salat library to serialize objects to be stored in MongoDB via Casbah.
Sometimes I need to tune a little how fields will be serialized, and Salat's annotations are a pretty convenient way to do it. BUT, is there any way to describe serialization parameters (Key, Ignore, etc.) not directly in the case classes (models) via annotations, but at some external point, to keep my models clear of the Salat dependency (aka POJO/POCO)?

### DataTau

#### MIT computer scientists can predict the price of Bitcoin

### StackOverflow

#### Disable MethodLengthChecker in one file in ScalaStyle

I'm using scalastyle in my Maven project build. I want to disable org.scalastyle.scalariform.MethodLengthChecker in one file. I have added the "scalastyle:off" comment tag and my file looks like this:

    //scalastyle:off number.of.methods.in.type
    package ...
    imports.....
    class XXX {
    }
    //scalastyle:on number.of.methods.in.type

When I run the mvn build I still get the error: Number of methods in class exceeds 20 line=16 column=6. Am I doing something wrong?

### CompsciOverflow

#### Why do we need "Bloom Filters" if we can use hash tables?

A Bloom filter is a probabilistic data structure designed to tell, rapidly and memory-efficiently, whether an element is in a set or not. If we can use hash tables, where lookups are O(1) in the best case and O(n) in very bad situations, why do we need a new abstract data structure that looks up an element with less certainty? What are the scenarios where hash tables fail and demand the use of Bloom filters? When should Bloom filters be used over hash tables and vice versa?

### /r/emacs

#### How to import Gmail contacts to use with mu4e?

I've seen that the only solution seemed to be using org-contacts... but there isn't more information about this. The mu4e doc just says "it should work", without any more guidance on how to do that. And I haven't found any link on how to import Gmail contacts into org-contacts. I'm kinda lost there... my google-fu seems to be failing me!
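For the Bloom filter question above, a minimal sketch (sizes and hash scheme chosen arbitrarily for illustration) shows the two guarantees being asked about: no false negatives, but possible false positives:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: m bits, k hash functions (parameters arbitrary)."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # an m-bit bitmap packed into one integer

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False means definitely absent; True only means "probably present".
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

The contrast with a hash table is the answer to the question: the filter stores only m bits no matter how many or how large the keys are (it cannot even enumerate them), at the cost of occasional false positives. That trade-off pays off when the key set is huge relative to memory, e.g. as a cheap pre-check in front of a slow disk or network lookup.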
submitted by fmargaine [link] [6 comments]

### QuantOverflow

#### PerformanceAnalytics and Annual Charting

I have seen charts that look like this. Is it possible to do something like this with PerformanceAnalytics, or is there any other package for doing this? Thanks in advance

#### What is the name of this product?

Consider the payoff S_T 1_{S_T > K}, where S_T is the asset price at maturity. What is this type of derivative called? And is it a liquid option?

### UnixOverflow

#### How to dual boot PC-BSD 10.3 (with zfs file system) and debian 7 (crunchbang) using grub2 boot loader in MBR?

I want to dual boot PC-BSD 10.3 with ZFS as the root file system (the only file system) and Debian 7 (crunchbang Linux) with ext4, using the grub2 boot loader installed in the MBR, with grub managed by Debian. All the documents deal with dual booting PC-BSD/FreeBSD using the UFS2 file system and Debian (How do I add PC BSD / FreeBSD to Grub 2 boot loader?). So I'm asking the question here. From my Debian grub configuration:

    cat /etc/grub.d/40_custom
    #!/bin/sh
    exec tail -n +3 $0
    # This file provides an easy way to add custom menu entries. Simply type the
    # menu entries you want to add after this comment. Be careful not to change
    # the 'exec tail' line above.
    menuentry "PC-BSD" {
        insmod zfs
        set root=(hd0,2)
        chainloader +1
    }

This entry is detected by grub and shown on the boot screen. But when I select PC-BSD, it shows the error "UFS not found". I think this is because PC-BSD 10.3 is using ZFS instead of UFS2. Please point me to a guide for booting PC-BSD with ZFS and Debian using grub2 managed by Debian.

### Planet Theory

#### TR14-136 | Classical Automata on Promise Problems | Viliam Geffert, Abuzer Yakaryilmaz

Promise problems were mainly studied in quantum automata theory. Here we focus on the state complexity of classical automata for promise problems.
First, it was known that there is a family of unary promise problems solvable by quantum automata using a single qubit, but the number of states required by the corresponding one-way deterministic automata cannot be bounded by a constant. For this family, we show that even two-way nondeterminism does not help to save a single state. By comparing this with the corresponding state complexity of alternating machines, we then get a tight exponential gap between two-way nondeterministic and one-way alternating automata solving unary promise problems. Second, despite the existing quadratic gap between Las Vegas realtime probabilistic automata and one-way deterministic automata for language recognition, we show that, by turning to promise problems, the tight gap becomes exponential. Last, we show that the situation is different for one-way probabilistic automata with two-sided bounded error. We present a family of unary promise problems that is very easy for these machines, solvable with only two states, but the number of states in two-way alternating or any simpler automata is not bounded by a constant. Moreover, we show that one-way bounded-error probabilistic automata can solve promise problems not solvable at all by any other classical model.

#### TR14-135 | Sign rank, VC dimension and spectral gaps | Shay Moran, Amir Yehudayoff, Noga Alon

We study the maximum possible sign rank of N \times N sign matrices with a given VC dimension d. For d=1, this maximum is 3. For d=2, this maximum is \tilde{\Theta}(N^{1/2}). Similar (slightly less accurate) statements hold for d>2 as well. We discuss the tightness of our methods, and describe connections to combinatorics, communication complexity and learning theory. We also provide explicit examples of matrices with low VC dimension and high sign rank. Let A be the N \times N point-hyperplane incidence matrix of a finite projective geometry with order n \geq 3 and dimension d \geq 2.
The VC dimension of A is d, and we prove that its sign rank is larger than N^{\frac{1}{2}-\frac{1}{2d}}. The large sign rank of A demonstrates yet another difference between finite and real geometries. To analyse the sign rank of A, we introduce a connection between sign rank and spectral gaps, which may be of independent interest. Consider the N \times N adjacency matrix of a \Delta-regular graph with second eigenvalue \lambda in absolute value and \Delta \leq N/2. We show that the sign rank of the signed version of this matrix is at least \Delta/\lambda. A similar statement holds for all regular (not necessarily symmetric) sign matrices. We also describe limitations of this approach, in the spirit of the Alon-Boppana theorem.

#### TR14-134 | Parameterized Complexity of CTL: Courcelle's Theorem For Infinite Vocabularies | Martin Lück, Arne Meier, Irina Schindler

We present a complete classification of the parameterized complexity of all operator fragments of the satisfiability problem in the computation tree logic CTL. The investigated parameterizations are temporal depth and pathwidth. Our results show a dichotomy between W[1]-hard and fixed-parameter tractable fragments. The only two operator fragments which are in FPT are those containing solely AF, or solely AX. We also prove a generalization of Courcelle's theorem to infinite vocabularies, which is used to prove the FPT-membership cases.

#### TR14-133 | Mutual Dimension | Adam Case, Jack H. Lutz

We define the lower and upper mutual dimensions mdim(x:y) and Mdim(x:y) between any two points x and y in Euclidean space. Intuitively these are the lower and upper densities of the algorithmic information shared by x and y. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information.
Our main theorem, the data processing inequality for mutual dimension, says that, if f:\mathbb{R}^m \rightarrow \mathbb{R}^n is computable and Lipschitz, then the inequalities mdim(f(x):y) \leq mdim(x:y) and Mdim(f(x):y) \leq Mdim(x:y) hold for all x \in \mathbb{R}^m and y \in \mathbb{R}^t. We use this inequality, and related inequalities that we prove in like fashion, to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points.

### CompsciOverflow

#### Undecidability in the context of modern programming languages

Imagine a program, executed by an interpreter, as a Turing machine. Consider this code:

    x = read_input
    print x

Does undecidability mean that there may possibly be an input to this program such that the program never halts?

### StackOverflow

#### Can I install DCEVM with Oracle Java 7 HotSpot VM?

I'm running Oracle Java 7 on Mac OS X 10.7.5. java -version:

    Java version "1.7.0_40"
    Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
    Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

I recently read about DCEVM and am very curious to try it out. However, I am confused about its compatibility. According to the binaries download page, there are binaries for OpenJDK Java 7 update 51, build 3. I'm not quite sure how that relates to the Java version I currently have on my machine. Does the installed JRE need to match the DCEVM HotSpot version? Do I need to install OpenJDK 7_51_3 to be able to use the DCEVM in question? Or can I install the HotSpot VM with my Oracle JRE? I'm not entirely sure how all the utilities that come with a JRE/JDK interact with the HotSpot VM, and whether they all have to be of the same build or not, given that compiled byte code should be able to run on any JVM of the same major version. Can anyone provide a little insight into how all these components fit together?
As a follow-up, does anyone know if/how/where I can find a compatible version of OpenJDK for OS X on which I can run DCEVM for Java 7 update 51, build 3?

### /r/netsec

#### CVE-2014-4114 can be used without SMB Share

### /r/scala

#### sbt console: getting value of defined function

Hello, how would I get the value of the defined function addOne in the session below? Simply typing addOne into the console did not return the value of the function, while typing res1 returned the value of the anonymous function.

    scala> def addOne(m: Int): Int = m + 1
    addOne: (m: Int)Int

    scala> val three = addOne(2)
    three: Int = 3

    scala> (x: Int) => x + 1
    res1: Int => Int = <function1>

    scala> addOne
    <console>:9: error: missing arguments for method addOne;
    follow this method with `_' if you want to treat it as a partially applied function
                  addOne
                  ^

    scala> res1
    res3: Int => Int = <function1>

submitted by metaperl [link] [5 comments]

### CompsciOverflow

#### The meaning of * in regular expressions

I'm designing a Turing machine that decides a language denoted by a regular expression. Let's say this expression is a^*bbc^*. Does this machine accept the string bb, since a^* and c^* can have zero instances or more?

### TheoryOverflow

#### Is graph coloring complete for poly-APX?

Is the graph coloring problem complete for poly-APX under C-reductions (alternatively, under AP-reductions)? For the graph coloring problem, speaking of a feasible solution means a proper coloring of all vertices of the given graph. The complexity class poly-APX contains all NP optimization problems that can be approximated within a factor that is polynomial in the size of the input. The notion of C-reducibility concerns approximation-preserving reductions, which keep the performance ratio of the feasible solutions under consideration within a linear factor. For definitions regarding approximation-preserving reducibilities, see P. Crescenzi: A short guide to approximation preserving reductions, CCC '97.
EDIT (1.8.2014): Somewhat related: in the paper "On syntactic versus computational views of approximability" by Khanna, Motwani, Sudan and Vazirani (SICOMP 28(1):164-191), I've found a remark stating that GRAPH COLORING and MAX CLIQUE are both in poly-APX-PB and interreducible (Remark 6 in that paper). I understand this is meant with respect to the E-reducibility defined in that paper. Later, in the sketch of the proof of Theorem 6 in that paper, I understand that they imply that MAX CLIQUE is complete for poly-APX-PB under E-reductions. I would also be grateful for a proof that GRAPH COLORING is complete for poly-APX-PB w.r.t. E-reducibility.

### CompsciOverflow

#### Converting DFA to regular expression

I have the following DFA. (Yellow states are accepting states.) I want to eliminate states step by step to find its regular expression. In my steps, I think there is a bug, because I do not know what to do with state 4. If you know how to convert this DFA to a regular expression, please help me.

### /r/freebsd

#### Swap + virtual machines

Is there a similar concept to Linux's "swappiness", and if so, how do I set it? My situation is that I have many FreeBSD virtual machines (guests), and I find that the available swap goes from 100% to 98% or 99% after a few hours of uptime, even though the load/usage of the machine is consistently low. I would like to have 0% swap usage if possible, to minimize disk I/O. Any advice or corrections to my understanding would be appreciated. submitted by earlof711 [link] [17 comments]

### QuantOverflow

#### List of 2008 NACE Rev 2 codes

I am looking for a simple list of the NACE 2008 Rev 2 codes (the European classification of economic sectors). The official publication is here, but is there an easily accessible list of the actual codes available anywhere?
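On the earlier question about the meaning of * : the Kleene star allows zero or more repetitions, so a^*bbc^* does accept bb. This can be checked mechanically, here using Python's re module as a stand-in for the Turing machine:

```python
import re

# a*bbc*: zero or more a's, the literal "bb", then zero or more c's.
pattern = re.compile(r"a*bbc*")

def accepts(s):
    # fullmatch requires the entire string to match, like an accepting run.
    return pattern.fullmatch(s) is not None

print(accepts("bb"))       # True: zero a's and zero c's are allowed
print(accepts("aabbccc"))  # True
print(accepts("abc"))      # False: only one b
```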
### CompsciOverflow

#### Reason to learn propositional & predicate logic

I can understand the importance for computer scientists, or any software-development-related engineers, of understanding basic logic as a foundation. But are there any tasks/jobs that explicitly require knowledge of these, other than tasks requiring knowledge representation using a knowledge base? I want to hear about types of tasks, rather than conceptual responses. The reason I ask this is just curiosity. While CS students have to spend a certain amount of time on this subject, some practicality-intensive courses (e.g. AI-Class) skipped this topic entirely. And I just wonder, for example, whether knowing predicate logic might help in drawing ER diagrams but might not be a requirement. Update (5/27/2012): Thanks for the answers. Now I think I totally understand & agree with the importance of logic in CS, with its vast range of applications. I just picked the best answer truly from the impressiveness that I got from the solution for Windows' blue screen issue.

### StackOverflow

#### Is this "injecting function (parser) to the anonymous implementation of Action trait"?

I don't quite understand this syntax (the one with the red square). Is this a case of "injecting a function (parser) into the anonymous implementation of the Action trait"? I've tried googling to confirm/discard that, but haven't found any article with the answer. Or... is there a companion object (in Play 2) whose name is "Action", whose apply method takes an (optional) BodyParser parameter? Thanks in advance for clearing this up for me! Best regards, Raka

#### Simple Erlang example won't run (error)

Good afternoon guys, I've recently taken an interest in Erlang and functional programming. I'm trying to run this simple hello world example without opening the Erlang shell. I'm able to run it successfully on Mac OS X (Yosemite), but I'd like to use my Fedora 20 VM instead.
So on Fedora (Linux) (and even Windows 7) I get the following error when trying to run the compiled beam:

    {"init terminating in do_boot",{undef,[{heythere,start,[],[]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
    Crash dump was written to: erl_crash.dump
    init terminating in do_boot ()

These are the switches I use to run the file:

    erl -noshell -start -s heythere -init -stop

I even substituted the "-s" switch and used "-run" to no avail. I'm able to run the modules within the shell but NOT outside of it. Is there something I'm doing wrong? Code:

    -module(heythere).
    -export([hello_world/0, this_function/0, both/0]).

    hello_world() ->
        io:fwrite("hello, world\n").

    this_function() ->
        io:fwrite("This is another function...ok~n").

    both() ->
        hello_world(),
        this_function().

I tried looking in the erl_crash.dump, but it's over 1000 lines long and I can't make heads or tails of it. :-( Thank you guys so much in advance.

#### Rapidly duplicate images on FreeBSD

How can I quickly duplicate images located in a folder? I usually used this command:

    cp -R -p path/to/folder path/to/another/folder

But, because of the high number of images in path/to/folder, the operation takes too much time. How can I do this task faster? Is there an alternative solution?

#### In Clojure, how do I configure Korma and Ragtime to use the same database?

I'm trying to work with databases in Clojure. At this point, I want to use Ragtime to modify the database schema itself, Korma to query and insert data, and H2 as the actual database. I think I'm using them properly, but I'm getting an error when I try to use Korma to access a table.
Here's my project.clj:

    (defproject dbexplore "0.1.0-SNAPSHOT"
      :dependencies [[org.clojure/clojure "1.6.0"]
                     [korma "0.4.0"]
                     [com.h2database/h2 "1.4.182"]
                     [ragtime "0.3.7"]]
      :plugins [[ragtime/ragtime.lein "0.3.7"]]
      :ragtime {:migrations ragtime.sql.files/migrations
                :database "jdbc:h2:/home/zck/Documents/dbexplore/resources/db/dbexplore.db"}
      :main dbexplore.core)

So I'm importing korma, h2database, and ragtime. I'm not sure it's pointing the Ragtime migration at the proper database location. I created a migration file with this as the contents:

    create table users (id INT, first varchar(32), last varchar(32));

And then ran it:

    zck@zck-desktop:~/Documents/dbexplore$ lein ragtime migrate
    Applying 2014-10-22-2-11-create-tables

I made a simple core.clj file that just selects everything from the users table:

    (ns dbexplore.core
      (:require [korma.db :as db]
                [korma.core]))

    (def db-connection (db/h2 {:db "./resources/db/dbexplore.db"}))
    (db/defdb korma-db db-connection)

    (korma.core/defentity users)

    (defn -main []
      (korma.core/select users))

But upon running it with lein run, I get an error:

    Failure to execute query with SQL: SELECT "users".* FROM "users" :: []
    JdbcSQLException: Message: Table "users" not found; SQL statement: SELECT "users".* FROM "users" [42102-182]
    SQLState: 42S02 Error Code: 42102
    Exception in thread "main" org.h2.jdbc.JdbcSQLException: Table "users" not found; SQL statement: SELECT "users".* FROM "users" [42102-182], compiling:(/tmp/form-init7833348906040195763.clj:1:90)

My suspicion is that the h2 call in core.clj points at a different database file from the one Ragtime is migrating, but I'm not sure how to specify it properly. How can I make these two libraries use the same database?

### QuantOverflow

#### What machine learning method is more suitable for prediction of financial time series? [on hold]

I have some time series from a stock exchange market.
For each of them, I want to answer the question of whether the price will grow at least p percent in the d coming days or NOT (and during these days, it will not fall below some percentage). Till now I have implemented a method using SVM, but it seems that there are better machine learning methods like HM-SVM. Which method is more suitable for this task? In addition, I am not familiar with (Hidden) Conditional Random Fields. Are they suitable for my goal?

### StackOverflow

#### Comparing lists - checking if one list is a segment of second one

Good morning everyone (or good evening :)), I have a problem with checking whether one list is a segment of another. All elements of the first list must be the first elements of the second list. For example:

    L1=[1,2,3,4]   L2=[3,4,5,6]         -> false
    L1=[1,2,3,4]   L2=[1,2,3,4,5,6,7,8] -> true
    L1=[-1,-7,3,2] L2=[1,2,3,4,5]       -> false

I know it would be easy to use a loop and then compare elements, but I want to do it in a functional way. I had an idea, though not a clever one: zip both lists together (with zip, for example), then unzip, change to a set, and compare whether it has the same length as the second list, but that method does not look good. I'm new to Scala, so sorry for a probably stupid question. I'm not asking for code, but for some advice! :) Thanks guys!

#### Why are the scala docs missing methods?

I have been trying to learn Scala. One thing that I have noticed is the quality of the docs. They seem to miss out on a lot of methods. Is this intentional? I feel like I am missing something, because they can't be this bad. For example: Blog post on reading files with Scala. The blog post recommends using the scala.io.Source.fromFile(..) method to read a file. It provides an iterator. Looks very nice to use. I want to get a better understanding of the class, so I go to the Scala docs on scala.io.Source. Nowhere in the docs does it show the method scala.io.Source.fromFile(..).
When I go to my IDE, it does try to autocomplete Source.fromFile(..), and it even works in the code. This happened to me before when I was trying to use Scala's database API. Am I missing something? Is there a secret button that pulls up this method? Have I gone my whole life being blind without realizing it? Or are the scaladocs really this bad?

#### How to profile a set of processes in FreeBSD?

I am trying to debug a service with respect to its performance. The service I am trying to debug internally spawns instances of the same binary. To improve the throughput, I am planning to increase the number of instances of the binary. Beyond a certain number of processes, throughput stops increasing. Now I am trying to reason out why this is happening. I need some help on where to start and on the tools available for process-level profiling. I am using the FreeBSD platform.

#### How to select an Akka actor with actorSelection?

I am trying to select an actor which has already been created. Here is the code:

    val myActor = system.actorOf(Props(classOf[MyActor]), "myActorName")
    println("myActor path - " + akka.serialization.Serialization.serializedActorPath(myActor))
    println("Selection from spec akka://unit-test/user/myActorName " + system.actorSelection("akka://unit-test/user/myActorName").resolveOne().value)
    println("Selection from spec /user/myActorName/ " + system.actorSelection("/user/myActorName/").resolveOne().value)

The result is:

    myActor path - akka.tcp://unit-test@127.0.0.1:46635/user/myActorName#1444872428
    Selection from spec akka://unit-test/user/myActorName None
    Selection from spec /user/myActorName/ None

Also, I can pass a message to the actor and it completes well. What did I miss with actorSelection? How do I select the actor properly? UPDATED It is very strange, but when I replace system.actorSelection("/user/myActorName/").resolveOne().value with system.actorFor("/user/myActorName/"), everything works. I mean, actorFor returns an actor.
(Which is not the right solution, since actorFor is deprecated.)

### Lobsters

#### Should I Read Papers?

### /r/netsec

#### CVE-2014-4113 Detailed Vulnerability and Patch Analysis

### CompsciOverflow

#### k-center algorithm in one-dimensional space

I'm aware of the general k-center approximation algorithm, but my professor (this is a question from a CS class) says that in a one-dimensional space, the problem can be solved (optimal solution found, not an approximation) in O(n^2) polynomial time, without depending on k or using dynamic programming. A general description of the k-center problem: given a set of nodes in an n-dimensional space, cluster them into k clusters such that the "radius" of each cluster (distance from the furthest node to its center node) is minimized. A more formal and detailed description can be found at http://en.wikipedia.org/wiki/Metric_k-center As you might expect, I can't figure out how this is possible. The part currently causing me problems is how the runtime can avoid depending on k. The nature of the problem leads me to try to step through the nodes on a sort of number line and find points at which to put boundaries, marking off the edges of each cluster that way. But this would require a runtime based on k. The O(n^2) runtime, though, makes me think it might involve filling out an n x n array with the distance between two nodes in each entry. Any explanation of how this works, or tips on how to figure it out, would be very helpful.
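The 1-D structure the question hints at shows up in a simpler (k-dependent) exact method: sort the points, note that with centers placed anywhere on the line the optimal radius is half of some pairwise distance (O(n^2) candidates), and for each candidate radius greedily count how many intervals of length 2r are needed. To be clear, this sketch is not the k-independent O(n^2) algorithm the professor describes, and it assumes centers need not be input nodes; it only illustrates why the candidate set is quadratic:

```python
def clusters_needed(points, r):
    """Greedy left-to-right cover of sorted points with intervals of length 2r."""
    count, i, n = 0, 0, len(points)
    while i < n:
        count += 1
        start = points[i]
        # Everything within [start, start + 2r] fits in one cluster of radius r.
        while i < n and points[i] <= start + 2 * r:
            i += 1
    return count

def k_center_1d(points, k):
    """Exact 1-D k-center, trying candidate radii in increasing order.

    O(n^2) candidate radii (half of each pairwise distance), each checked
    in O(n) by the greedy cover above.
    """
    pts = sorted(points)
    candidates = sorted({abs(a - b) / 2 for a in pts for b in pts})
    for r in candidates:
        if clusters_needed(pts, r) <= k:
            return r
```

For example, on the points [1, 2, 3, 10, 11, 12] with k=2, the greedy check accepts radius 1.0 (centers at 2 and 11) and rejects everything smaller.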
### Portland Pattern Repository

#### Wiki Links Of Interest (by shark.armchair.mb.ca 40 hours ago)

### DataTau

#### Why R users should try GraphLab Create

#### Alumni Spotlight: Alex Mentch, Data Scientist at Facebook

### Wondermark

#### #1071; Old Dog, Oldest Trick

### StackOverflow

#### Running Java gives "Error: could not open 'C:\Program Files\Java\jre6\lib\amd64\jvm.cfg'"

After years of working OK, I'm suddenly getting this message when trying to start the JVM:

    Error: could not open 'C:\Program Files\Java\jre6\lib\amd64\jvm.cfg'

I tried uninstalling, and got a message saying a DLL was missing (unspecified). Tried re-installing, all to no avail. At the same time, when trying to start Scala I get:

    \Java\jdk1.6.0_25\bin\java.exe was unexpected at this time.

Checked %JAVA_HOME% and %path% - both OK. Can anyone help?

### /r/compsci

#### Comp Sci HW Please Help: Python

Prompt: Write a loop that reads positive integers from standard input, printing out those values that are greater than 100, each on a separate line. The loop terminates when it reads an integer that is not positive. My code:

    i = int(input("please enter a number"))
    while i >= 0:
        if i > 100:
            print(i)
        i = int(input("please type in a number"))

It works in the Python shell but not in TuringsCraft. submitted by ZAMbullo123 [link] [1 comment]

### QuantOverflow

#### Best way to store hourly/daily options data for research purposes

There are quite a few discussions here about storage, but I can't find quite what I'm looking for. I need to design a database to store (mostly) option data (strikes, premiums bid/ask, etc.). The problem I see with an RDBMS is that, given the big number of strikes, tables will be enormously long and, hence, result in slow processing. While I'm reluctant to use MongoDB or a similar NoSQL solution, for now it seems a very good alternative (quick, flexible, scalable). 1. There is no need for tick data; it will be hourly and daily closing prices & whatever other parameters I'd want to add.
So there is no need for it to be updated frequently, and writing speed is not that important.

2. The main performance requirement is data mining, stats and research, so it should be as quick as possible (and preferably easy) to pull and aggregate data from it. Think of a 10-year backtest which performs ~100 transactions weekly over various types of options, or of calculating a volatility swap over some extended period of time. The quicker the better.

3. There is a lot of existing historical data which will be transferred into the database, and it will be updated on a daily basis. I'm not sure how much memory exactly it will take, but AFAIK memory should not be a constraint at all.

4. Support by popular programming languages & packages (C++, Java, Python, R) is very preferable, but would not be a deal breaker.

Any suggestions?

### Wes Felter

#### CPU World: Intel Broadwell-E delayed even more

### CompsciOverflow

#### How does the CPU know which process generated an interrupt?

When an operating system supports multiprogramming, it needs a scheduling algorithm to decide which process the CPU runs. If a process is in the blocked state waiting for I/O, the scheduler lets another process run on the CPU while the blocked process waits for the I/O response. In this situation, when the blocked process returns to the ready state and then runs again, how does the CPU know which process to run and where it was stopped?
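The bookkeeping the question asks about lives in the OS, not the CPU: each process has a process control block (PCB) that saves the program counter and registers when the process is descheduled, and the OS maps each pending I/O request to the PCB that is blocked on it. A minimal simulation of that idea (all names here are illustrative, not any real kernel's API):

```python
from collections import deque

class PCB:
    """Process control block: everything needed to resume a process."""
    def __init__(self, pid):
        self.pid = pid
        self.saved_pc = 0      # where execution stopped
        self.saved_regs = {}   # register contents at the switch
        self.state = "ready"

def context_switch(current, next_proc, cpu):
    """Save the running process's CPU state into its PCB, restore the next one's."""
    current.saved_pc = cpu["pc"]
    current.saved_regs = dict(cpu["regs"])
    cpu["pc"] = next_proc.saved_pc
    cpu["regs"] = dict(next_proc.saved_regs)
    next_proc.state = "running"
    return next_proc

# When an I/O interrupt arrives, the device controller identifies which
# device finished; the OS looks up which PCB was blocked on that device
# and moves it back to the ready queue -- the CPU itself never "knows".
blocked_on = {}   # device id -> PCB blocked waiting for that device
ready = deque()   # PCBs eligible to run

def handle_io_interrupt(device_id):
    pcb = blocked_on.pop(device_id)
    pcb.state = "ready"
    ready.append(pcb)
```

So the answer to "where it was stopped" is the saved program counter in the PCB, restored by `context_switch` when the scheduler picks the process again.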
### Lobsters

#### Systems Software Research is Irrelevant

### StackOverflow

#### Scala compile server error when using nailgun

I am currently using IntelliJ IDEA 13.0 Build 132.197, and I frequently run into this problem when building any Scala project:

```
6:08:42 PM Scala compile server: java.net.BindException: Address already in use: JVM_Bind
    at java.net.DualStackPlainSocketImpl.bind0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
    at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
    at java.net.ServerSocket.bind(ServerSocket.java:376)
    at java.net.ServerSocket.<init>(ServerSocket.java:237)
    at com.martiansoftware.nailgun.NGServer.run(Unknown Source)
    at java.lang.Thread.run(Thread.java:724)
```

This error happens when I have only one project open and without nailgun in use by any other process. Closing and re-opening the project does not work; I still get the same error after exiting IntelliJ and restarting it.
In the Windows Task Manager, I see a java.exe process with the following command line:

```
E:\Dev\Java\bin\java -cp "E:/Dev/Java/lib/tools.jar;C:/Program Files (x86)/JetBrains/IntelliJ IDEA 132.197/lib/jps-server.jar;C:/Program Files (x86)/JetBrains/IntelliJ IDEA 132.197/lib/trove4j.jar;C:/Program Files (x86)/JetBrains/IntelliJ IDEA 132.197/lib/util.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/scala-library.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/scala-plugin-runners.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/jps/nailgun.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/jps/sbt-interface.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/jps/incremental-compiler.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/jps/jline.jar;C:/Users/hanxue/.IntelliJIdea13/config/plugins/Scala/lib/jps/scala-jps-plugin.jar" -Xmx1024m -server -Xss1m -XX:MaxPermSize=256m org.jetbrains.plugins.scala.nailgun.NailgunRunner 3200
```

Is this because of an issue with Nailgun settings?

### XKCD

#### Houston

### Planet Theory

#### An Alternative to the Seddighin-Hajiaghayi Ranking Methodology

[Update 10/24/14: there was a bug in the code I wrote yesterday night; apologies to the colleagues at Rutgers!]

[Update 10/24/14: a reaction to the authoritative study of MIT and the University of Maryland. Also, coincidentally, today Scott Adams comes down against reputation-based rankings.]

Saeed Seddighin and MohammadTaghi Hajiaghayi have proposed a ranking methodology for theory groups based on the following desiderata: (1) the ranking should be objective, and based only on quantitative information, and (2) the ranking should be transparent, with the methodology openly revealed. Inspired by their work, I propose an alternative methodology that meets both criteria but has some additional advantages, including an easier implementation.
Based on the same Brown University dataset, I count, for each theory group, the total number of letters in the name of each faculty member. Here are the results:

1. (201) Massachusetts Institute of Technology
2. (179) Georgia Institute of Technology
3. (146) Rutgers – State University of New Jersey – New Brunswick
4. (142) University of Illinois at Urbana-Champaign
5. (141) Princeton University
6. (139) Duke University
7. (128) Carnegie Mellon University
8. (126) University of Texas – Austin
9. (115) University of Maryland – College Park
10. (114) Texas A&M University
11. (111) Northwestern University
12. (110) Stanford University
13. (108) Columbia University
14. (106) University of Wisconsin – Madison
15. (105) University of Massachusetts – Amherst
16. (105) University of California – San Diego
17. (98) University of California – Irvine
18. (94) New York University
19. (94) State University of New York – Stony Brook
20. (93) University of Chicago
21. (91) Harvard University
22. (91) Cornell University
23. (87) University of Southern California
24. (87) University of Michigan
25. (85) University of Pennsylvania
26. (84) University of California – Los Angeles
27. (81) University of California – Berkeley
28. (78) Dartmouth College
29. (76) Purdue University
30. (71) California Institute of Technology
31. (67) Ohio State University
32. (63) Brown University
33. (61) Yale University
34. (54) University of Rochester
35. (53) University of California – Santa Barbara
36. (53) Johns Hopkins University
37. (52) University of Minnesota – Twin Cities
38. (49) Virginia Polytechnic Institute and State University
39. (48) North Carolina State University
40. (47) University of Florida
41. (45) Rensselaer Polytechnic Institute
42. (44) University of Washington
43. (44) University of California – Davis
44. (44) Pennsylvania State University
45. (40) University of Colorado Boulder
46. (38) University of Utah
47. (36) University of North Carolina – Chapel Hill
48. (33) Boston University
49. (31) University of Arizona
50. (30) Rice University
51. (14) University of Virginia
52. (12) Arizona State University
53. (12) University of Pittsburgh

I should acknowledge a couple of limitations of this methodology: (1) the Brown dataset is not current, but I believe that the results would not be substantially different even with current data; (2) it might be reasonable to count only the letters in the last name, or to weigh the letters in the last name by 1 and the letters in the first name by 1/2. If there is sufficient interest, I will post rankings according to these other methodologies.

### /r/compsci

#### Is a computer science degree manageable?

I am going to start my first year of uni at RMIT doing the Comp Science course. I keep reading around the web, with everyone asking how hard computer science is and 99% replying that it is HARD and that you need to be really good at maths, etc. Is that really true? I am so scared :( By the way, I am studying for my year 12 exams starting in 4 days. So scared!! submitted by nnkc911

#### Post Grad Learning Computer Science

Hello all, I want to learn more about computer science, but I am not sure how to go about doing it. I recently received my master's in biochem. While a grad student, I took a sequencing class where we learned Linux, and I found it really interesting. At this point in my education, would it be best to try to take classes at a community college? Would I be able to take graduate-level courses, or would that be too intense? I did my undergraduate degree in forensic science, so I am also considering computer forensics programs. Any advice is greatly appreciated!
submitted by tuff_ghost88

### StackOverflow

#### Adding a Java Project to my Scala Project

Similar to this question, "Troubles with importing java package to scala project (IntelliJ 10.5.2)": how do I add a Java project I have created to a Scala project?

### Wondermark

#### "Sea Lion" Has Been Verbed

My comic from last month about The Terrible Sea Lion has really struck a chord! It's been mentioned by the Independent (above), Slate, VentureBeat, and Feministe, and cited in a ton of blog posts. That's really neat to see! I'm happy that it's resonated with so many people. So I thought this would be fun: for just a week (through October 31 only) I'm making sea lion shirts! Different colors (and even hoodies) are available too! And a tie-dye shirt, because WHY NOT. Now of course I should note: reasonable people can disagree. But c'mon, I don't even know how that last one RELATES.

### /r/emacs

#### Emacs Lisp implemented in Common Lisp?

I keep seeing a million implementations of Common Lisp for Emacs Lisp. Has anybody ever tried implementing Emacs Lisp in Common Lisp? It might sound pointless, but it'd be a neat place for someone to toy around with porting Emacs to Common Lisp. submitted by Sodel-The-Vociferous

### CompsciOverflow

#### Lamport Timestamps: When to Update Counters

In the timepiece (excuse the pun) that is "Time, Clocks, and the Ordering of Events", Lamport describes the logical clock algorithm as follows:

1. Each process Pi increments Ci between any two successive events.
2. If event a is the sending of a message m by process Pi, then the message m contains a timestamp Tm = Ci(a).
3. Upon receiving a message m, process Pi sets Ci greater than or equal to its present value and greater than Tm.

However, the algorithm as it is described on Wikipedia (and other websites) is a little different:

1. A process increments its counter before each event in that process.
2. When a process sends a message, it includes its counter value with the message.
3. On receiving a message, the receiver process sets its counter to be greater than the maximum of its own value and the received value before it considers the message received.

This leaves me with the following questions:

1. Should we increment the counter before sending a message, as the sending of a message is itself an event? Is this incremented timestamp the value that is sent with the message?
2. When a message is received by process Pi, Lamport states that Pi's logical clock should be set to max(Tm + 1, Ci). However, the Wikipedia article says that this should be max(Tm, Ci) + 1. Is Wikipedia wrong?

### /r/netsec

#### The Insecurity of Things: owning a "smart" home hub

### /r/emacs

#### Online dictionary in Emacs 24.4

#### Welcome to The Dark Side: Switching to Emacs

### arXiv Programming Languages

#### Parallel Prefix Polymorphism Permits Parallelization, Presentation & Proof (arXiv:1410.6449v1 [cs.PL])

Polymorphism in programming languages enables code reuse. Here, we show that polymorphism has broad applicability far beyond computations for technical computing: parallelism in distributed computing, presentation of visualizations of runtime data flow, and proofs for formal verification of correctness. The ability to reuse a single codebase for all these purposes provides new ways to understand and verify parallel programs.

#### Justifying the small-world phenomenon via random recursive trees (arXiv:1410.6397v1 [cs.DM])

We present a new technique for proving logarithmic upper bounds for diameters of evolving random graph models, which is based on defining a coupling between random graphs and variants of random recursive trees. The advantage of the technique is three-fold: it is quite simple and provides short proofs, it is applicable to a broad variety of models including those incorporating preferential attachment, and it provides bounds with small constants.
We illustrate this by proving, for the first time, logarithmic upper bounds for the diameters of the following well-known models: the forest fire model, the copying model, the PageRank-based selection model, the Aiello-Chung-Lu models, the generalized linear preference model, directed scale-free graphs, the Cooper-Frieze model, and random unordered increasing k-trees. Our results shed light on why the small-world phenomenon is observed in so many real-world graphs.

#### Bar Recursion over Finite Partial Functions (arXiv:1410.6361v1 [cs.LO])

We introduce a new, demand-driven variant of Spector's bar recursion in the spirit of the Berardi-Bezem-Coquand functional. The bar recursion takes place over finite partial functions, where the control parameter \varphi, used in Spector's bar recursion to terminate the computation at sequences s satisfying \varphi(\hat{s}) < |s|, now acts as a guide for deciding exactly where to make bar recursive updates, terminating the computation whenever \varphi(\hat{u}) \in \mathrm{dom}(u). We begin by examining the computational strength of this new form of recursion. We prove that it is primitive recursively equivalent to the original Spector bar recursion, and thus in particular exists in the same models. Then, in the main part of the paper, we show that demand-driven bar recursion can be used to give an alternative functional interpretation of classical countable choice. We use it to extract a new bar recursive program from the proof that there is no injection from \mathbb{N}\to\mathbb{N} to \mathbb{N}, and this turns out to be both more intuitive and, for many inputs, more efficient than the usual program obtained using Spector bar recursion.

#### Perturbation analysis of a nonlinear equation arising in the Schaefer-Schwartz model of interest rates (arXiv:1410.6321v1 [q-fin.CP])

We deal with the interest rate model proposed by Schaefer and Schwartz, which models the long rate and the spread, defined as the difference between the short and the long rates. The approximate analytical formula for the bond prices suggested by the authors requires the computation of a certain constant, defined via a nonlinear equation and an integral of a solution to a system of ordinary differential equations. In this paper we use perturbation methods to compute this constant. Coefficients of its expansion are given in closed form and can be constructed to arbitrary order. However, our numerical results show that very good accuracy is achieved already with a small number of terms.

#### Enhanced TKIP Michael Attacks (arXiv:1410.6295v1 [cs.CR])

This paper presents new attacks against TKIP within IEEE 802.11 based networks. Using the known Beck-Tews attack, we define schemas to continuously generate new keystreams, which allow more and longer arbitrary packets to be injected into the network. We further describe an attack against the Michael message integrity code that allows an attacker to concatenate a known valid TKIP packet with an unknown one such that the unknown MIC at the end is still valid for the new entire packet. Based on this, a schema to decrypt all traffic that flows towards the client is described.

#### Verifying linearizability: A comparative survey (arXiv:1410.6268v1 [cs.LO])

Linearizability has become the key correctness criterion for concurrent data structures, ensuring that histories of the concurrent object under consideration are consistent, where consistency is judged with respect to a sequential history of a corresponding abstract data structure. Linearizability allows any order of concurrent (i.e., overlapping) calls to operations to be picked, but requires the real-time order of non-overlapping calls to be preserved.
A history of overlapping operation calls is linearizable if at least one of the possible orders of operations forms a valid sequential history (i.e., corresponds to a valid sequential execution of the data structure), and a concurrent data structure is linearizable iff every history of the data structure is linearizable. Over the years numerous techniques for verifying linearizability have been developed, using a variety of formal foundations such as refinement, shape analysis, reduction, etc. However, as the underlying framework, nomenclature and terminology of each method differ, it has become difficult for practitioners to judge the differences between the approaches and, hence, to judge the methodology most appropriate for the data structure at hand. We compare the major methods used to verify linearizability, describe the main contribution of each method, and compare their advantages and limitations.

#### A QoE-Based Scheduling Algorithm for UGS Service Class in WiMAX Network (arXiv:1410.6154v1 [cs.NI])

To satisfy the increasing demand for multimedia services in broadband Internet networks, WiMAX (Worldwide Interoperability for Microwave Access) technology has emerged as an alternative to wired broadband access solutions. It provides an Internet connection to a broadband coverage area several kilometers in radius while ensuring a satisfactory quality of service (QoS); it is an adequate response for some rural or inaccessible areas. Unlike DSL (Digital Subscriber Line) or other wired technologies, WiMAX uses radio waves and can provide point-to-multipoint (PMP) and point-to-point (P2P) modes. In parallel, it is observed that, in contrast to traditional quality evaluation approaches, current research focuses on the user's perceived quality: existing scheduling algorithms take into account QoS and many other parameters, but not the Quality of Experience (QoE).
In this paper, we present a QoE-based scheduling solution for WiMAX networks that schedules UGS connections using QoE metrics. Indeed, the proposed solution controls the packet transmission rate so as to match the minimum subjective rate requirements of each user. Simulation results show that, by applying various levels of mean opinion score (MOS), the QoE provided to the users is improved in terms of throughput, jitter, packet loss rate and delay.

#### Benchmarking Usability and Performance of Multicore Languages (arXiv:1302.2837v2 [cs.DC] UPDATED)

Developers face a wide choice of programming languages and libraries supporting multicore computing. Ever more diverse paradigms for expressing parallelism and synchronization become available while their influence on usability and performance remains largely unclear. This paper describes an experiment comparing four markedly different approaches to parallel programming: Chapel, Cilk, Go, and Threading Building Blocks (TBB). Each language is used to implement sequential and parallel versions of six benchmark programs. The implementations are then reviewed by notable experts in the language, thereby obtaining reference versions for each language and benchmark. The resulting pool of 96 implementations is used to compare the languages with respect to source code size, coding time, execution time, and speedup. The experiment uncovers strengths and weaknesses in all approaches, facilitating an informed selection of a language under a particular set of requirements. The expert review step furthermore highlights the importance of expert knowledge when using modern parallel programming approaches.

### StackOverflow

#### A ReactiveMongo pattern to return the created mongodb document in one RESTful request

Environment: Play! 2.2.3, ReactiveMongo 0.10.0-SNAPSHOT

Suppose I have a page with a list of documents (let's say "projects") and a button that pops up a modal dialogue with fields to be filled in. Upon pressing the OK button, the page sends a request with a JSON body to the backend:

```
{
  "name": "Awesome Project",
  "url": "https://github.com/ab/cd",
  "repository": "git@github.com/ab/cd.git",
  "script": "empty"
}
```

The backend routes the request to the Action defined like this:

```scala
def projectsCollection: JSONCollection = db.collection[JSONCollection]("projects")

def create = Action.async(parse.json) { request =>
  projectsCollection.insert(request.body) map {
    case LastError(true, _, _, _, Some(doc), _, _) =>
      Created(JsObject(List(
        "result" -> JsString("OK"),
        "doc" -> BSONFormats.toJSON(doc)
      )))
    case LastError(false, err, code, msg, _, _, _) =>
      NotAcceptable(JsObject(List(
        "result" -> JsString("ERROR"),
        "error" -> JsString(err.getOrElse("unknown")),
        "code" -> JsNumber(code.getOrElse[Int](0)),
        "msg" -> JsString(msg.getOrElse("no message"))
      )))
  }
}
```

The LastError case class has a parameter originalDocument: Option[BSONDocument] which is returned in the request response body, but it isn't the document I expected. I want the document with the BSONObjectID filled in, or at least the _id itself. Trying to retrieve the freshly created document led me into a dead end, because everything is wrapped in a Future. How can I write elegant code that does this task?

### /r/systems

#### "Experimental Study of High Performance Priority Queues" [PDF, 2007]

### TheoryOverflow

#### Lower bound proof for compressive sensing (Gel'fand widths)?

Let x \in \mathbb{R}^n have k non-zero entries. The main insight of compressive sensing is that there exist m \times n matrices A with m = O(k \log n/k) such that any such x can be recovered from Ax in polynomial time. A little thought shows that Ax must have at least \log \binom{n}{k} = \Theta(k \log n/k) bits.
I believe a stronger statement is true as well, namely that we must have m = \Omega(k \log n/k) measurements. I know the lower bound on m has something to do with Gel'fand widths, but I am having a hard time finding a resource that lays out the argument explicitly. Either pointers to write-ups or a rough summary of the argument would be helpful.

### /r/compsci

#### Resources to learn about how to automatically decompose an object-oriented program into a procedural program?

I've been tasked with developing a method of converting an arbitrary PHP program that contains object-oriented functionality into a purely procedural PHP program. I know there are tools out there like Rose, TXL, Stratego, ANTLR, etc. that facilitate source-to-source transformation, but I have no idea how to go about using them to do the actual transformation, i.e. what sort of rules I need to write. Does anyone have any links to general sources of information on how to translate object-oriented code into procedural code? How does one simulate object-oriented features in a procedural way? It's been pointed out to me that I should consult "Object-Oriented Programming With ANSI-C", but it's a difficult read (for me) and I am having trouble making it past the beginning (my ANSI C is rusty). I was hoping there would be some easier-to-understand resources out there. I was told that there's a lot of good information "out there" about translating OO to procedural code, but I am having trouble finding it. Thanks for any pointers to any resources. Maybe this project is too difficult for me, I don't know, but I am feeling pretty discouraged right now and any pointers to good resources would be much appreciated.
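The core of the transformation asked about above is mechanical: an object becomes a plain record, a method becomes a free function taking the record as an explicit first argument, and dynamic dispatch becomes a lookup on a stored type tag. A minimal before/after sketch (shown in Python rather than PHP, with illustrative names; the same rewrite applies to PHP using associative arrays for the records):

```python
# Object-oriented original.
class Counter:
    def __init__(self, start):
        self.value = start

    def increment(self, by=1):
        self.value += by
        return self.value

# Procedural decomposition: the instance becomes a plain record (a dict),
# and each method becomes a free function whose first parameter is the
# record that used to be `self`.
def counter_new(start):
    return {"type": "Counter", "value": start}

def counter_increment(rec, by=1):
    rec["value"] += by
    return rec["value"]

# Dynamic dispatch becomes an explicit table keyed by the type tag, so
# subclasses that override `increment` just register another entry.
INCREMENT_DISPATCH = {"Counter": counter_increment}

def increment(rec, by=1):
    return INCREMENT_DISPATCH[rec["type"]](rec, by)
```

The rewrite rules for a source-to-source tool then fall out of this shape: one rule per class (constructor to record builder), one per method (implicit `self`/`$this` to an explicit parameter), and one per virtual call site (method call to dispatch-table lookup).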
submitted by rumblebell

### Lobsters

#### Verizon injecting UIDs into all HTTP traffic on its wireless network, regardless of opt-out

### Planet Clojure

#### Welcome to The Dark Side: Switching to Emacs

I have to start this post by saying I've been a dogmatic Vim partisan since the 1990s, when I started using vi on the Solaris and Irix boxen I had access to, and then on my own machines when I got started with Linux in 1994. I flamed against Emacs on Usenet and called it all the epithets (Escape Meta Alt Ctrl Delete; Eight Megs And Constantly Swapping — 8 megs was a lot then; Eventually Mangles All Computer Storage)… I couldn't stand the chord keys and the lack of modality. Even once I got heavily into Lisp I still tried to stick with Vim, or tried LightTable, or Atom, or SublimeText. But then one day I hit a wall, and Emacs (plus cider-mode and slime and a few other packages) was the obvious solution. Now I'm out there evangelizing Emacs (I'm writing this post in the Markdown major mode plus some helpful minor modes) and I figured I'd offer some advice for those looking to convert to the Church of Emacs. Primarily, this post is inspired by a request I received on Twitter. Instead of just compiling some links in a gist, I figured it was worthy of a blog post, so my seniors in the Church of Emacs can tell me where I'm wrong in the comments. But this is based on my experience converting from Vim to Emacs, so I'll explain what worked for me.

### Emacs Prelude

Prelude is really a great way to hit the ground running. It provides a wealth of sensible default packages, fixes the color scheme, and configures your .emacs.d config directory in a way that makes it easy to configure without breaking shit. The install instructions are here and I highly recommend it.

UPDATE: I forgot something vitally important about Prelude.
Prelude comes with guru-mode enabled by default, which disables your arrow keys and prods you to use the default Emacs navigation commands instead (i.e. C-p for up, C-n for down, C-b for left, C-f for right). These commands are worth knowing, but I felt like I was being trolled when my arrow keys just told me what chord combination to use instead. (As an aside, Thoughtbot's dotfiles do the same thing with Vim.) So you have two options: one is to M-x guru-mode to toggle it every session. The more permanent solution is to add the following to your config (if you're using Prelude, it should go in ~/.emacs.d/personal/preload/user.el):

(setq prelude-guru nil)

Just my personal preference, but something I found really annoying when I got started. As for all those useful navigation and editing commands, Emacs (naturally) has a built-in tutorial accessible from M-x help-with-tutorial or just C-h t.

UPDATE TO THE UPDATE: Bozhidar Batsov (the author of Prelude) pointed out in this comment that the current default behavior is to warn when arrow keys are used, not to disable them. I hadn't noticed the change, which came in with this commit. You can find the configuration options for guru-mode in the README here.

### Emacs for Mac OS X

I really like using the packaged app version of Emacs available from http://emacsformacosx.com/. It works great with Prelude and doesn't include the cruft that Aquamacs tacks on to make it more Mac-ish. You get a nicely packaged Emacs.app that follows OS X conventions but is really just straight GNU Emacs.

### evil-mode

So, this is a touchy subject for me. When I first switched, I used evil-mode to get my familiar Vim keybindings in Emacs, but I actually found it made it more difficult to dive into Emacs.
evil-mode is actually impressively complete when it comes to imposing Vim functionality on top of Emacs, but there are still times when you need to hit C-x k or M-x something-mode, and the cognitive dissonance of switching between them was just overwhelming. So I'd forego evil-mode and just keep the Emacs Wiki open in your browser for the first few days. It doesn't take that long to dive in head-first.

### Projectile

It ships with Prelude, so it's not a major headline, but it does help to keep your projects organized and to navigate files.

## On Lisp

Since this is really about Clojure development environments, I might as well dive into the inherent Lispiness of Emacs. The extension language is a Lisp dialect, and very easy to learn and use. Emacs is so extensible that one of the running jokes is that it's a great operating system in need of a decent text editor. I'll get to that later.

### cider-mode

Interacting with Clojure is amazing with cider. You get an in-editor REPL, inline code evaluation, documentation lookup, a scratch buffer for arbitrary code evaluation, and a dozen other features. LightTable is nice with its InstaRepl, but Emacs with cider is the real deal. You cannot wish for a better Clojure dev environment… and the community agrees: cider-jack-in connects to a lein repl :headless instance, and cider-mode gives you inline evaluation in any Clojure file. It's amazing.

### paredit and smartparens

Ever have trouble keeping your parens balanced? You're covered. paredit is the classic solution, but a lot of folks are using smartparens instead… I've been using smartparens in strict mode and it's made me a lot more disciplined about how I place my forms.

## Other Languages

I've been using Emacs for Ruby, JavaScript, Haskell, C++, and so on, and it's been great. The only time I launch another app is when I have to deal with Java, because IntelliJ/Android Studio make life so much easier.
But most of that is all the ridiculous build ceremony for Java, so that's neither here nor there.

## EmacsOS

That joke about Emacs being an operating system? Not such a joke. My favorite Twitter client right now is Emacs's twittering-mode. There's Gnus for Usenet and email, and Emacs 24.4 just came out with an improved in-editor web browser called eww. Emacs is a deep, deep rabbit hole. The only way in is head first. But there's so much you can do in here, and it's a staggeringly powerful environment. Welcome to the dark side. We have macros.

### Planet Theory

#### Btrim: A fast, lightweight adapter and quality trimming program for next-generation sequencing technologies

Authors: Yong Kong

Abstract: Btrim is a fast and lightweight software tool to trim adapters and low-quality regions in reads from ultra-high-throughput next-generation sequencing machines. It can also reliably identify barcodes and assign the reads to the original samples. Based on a modified Myers bit-vector dynamic programming algorithm, Btrim can handle indels in adapters and barcodes. It removes low-quality regions and trims off adapters at both or either end of the reads. A typical trimming of 30M reads with two sets of adapter pairs can be done in about a minute with a small memory footprint. Btrim is a versatile stand-alone tool that can be used as the first step in virtually all next-generation sequence analysis pipelines. The program is available at \url{this http URL}.

#### Quantum algorithms for shortest paths problems in structured instances

Authors: Aran Nayebi, Virginia Vassilevska Williams

Abstract: We consider the quantum time complexity of the all pairs shortest paths (APSP) problem and some of its variants. The trivial classical algorithm for APSP and most all-pairs path problems runs in O(n^3) time, while the trivial algorithm in the quantum setting runs in \tilde{O}(n^{2.5}) time, using Grover search.
A major open problem in classical algorithms is to obtain a truly subcubic time algorithm for APSP, i.e. an algorithm running in O(n^{3-\varepsilon}) time for constant \varepsilon > 0. To approach this problem, many truly subcubic time classical algorithms have been devised for APSP and its variants for structured inputs. Some examples of such problems are APSP in geometrically weighted graphs, graphs with small integer edge weights or a small number of weights incident to each vertex, and the all pairs earliest arrivals problem. In this paper we revisit these problems in the quantum setting and obtain the first nontrivial (i.e. O(n^{2.5-\varepsilon}) time) quantum algorithms for these problems.

#### Tight tradeoffs for approximating palindromes in streams

Authors: Pawel Gawrychowski, Przemyslaw Uznanski

Abstract: We consider the question of finding the longest palindrome in a text of length n in the streaming model, where the characters arrive one by one and we cannot go back and retrieve a previously seen character. While computing the answer exactly using sublinear memory is not possible in such a setting, one can still hope for a good approximation guarantee. We focus on the two most natural variants, where we aim for either an additive or a multiplicative approximation of the length of the longest palindrome. We first show that there is no point in considering either deterministic or Las Vegas algorithms in such a setting, as they cannot achieve sublinear space complexity. For Monte Carlo algorithms, we provide a lower bound of \Omega(\frac{n}{E}) bits for approximating the answer with an additive error E, and \Omega(\frac{\log n}{\varepsilon}) bits for approximating the answer within a multiplicative factor of (1+\varepsilon). Then we construct a generic Monte Carlo algorithm which, by choosing the parameters appropriately, achieves space complexity matching up to a logarithmic factor for both variants.
This substantially improves the previous results [Berenbrink et al., STACS 2014] and essentially settles the space complexity of the problem.

#### Permutation Reconstruction from Differences

Authors: Marzio De Biasi

Download: PDF

Abstract: We prove that the problem of reconstructing a permutation \pi_1,\dotsc,\pi_n of the integers [1\dotso n] given the absolute differences |\pi_{i+1}-\pi_i|, i = 1,\dotsc,n-1 is NP-complete. As an intermediate step we first prove the NP-completeness of the decision version of a new puzzle game that we call the Crazy Frog Puzzle. Permutation reconstruction from differences is one of the simplest combinatorial problems that have been proved to be computationally intractable.

#### A type assignment for lambda-calculus complete both for FPTIME and strong normalization

Authors: Erika De Benedetti, Simona Ronchi Della Rocca

Download: PDF

Abstract: One of the aims of Implicit Computational Complexity is the design of programming languages with bounded computational complexity; indeed, guaranteeing and certifying limited resource usage is of central importance for various aspects of computer science. One of the more promising approaches to this aim is based on the use of lambda-calculus as a paradigmatic programming language and the design of type assignment systems for lambda-terms, where types guarantee both functional correctness and the complexity bound. Here we propose a system of stratified types, inspired by intersection types, where intersection is a non-associative operator. The system, called STR, is correct and complete for polynomial time computations; moreover, all the strongly normalizing terms are typed in it, thus increasing the typing power with respect to previous proposals. Moreover, STR enjoys stronger expressivity with respect to the previous system STA, since it allows typing a restricted version of iteration.
### Lobsters

#### Tug: Making development easier with Docker

### /r/netsec

#### The Case of the Modified Binaries

### CompsciOverflow

#### Counting the number of words accepted by an acyclic NFA

Let M be an acyclic NFA. Since M is acyclic, L(M) is finite. Can we compute |L(M)| in polynomial time? If not, can we approximate it?

Note that the number of words is not the same as the number of accepting paths in M, which is easily computable.

Let me mention one obvious approach that doesn't work: convert the NFA to a DFA (which will also be acyclic), then count the number of accepting paths in the DFA. This doesn't result in a polynomial-time algorithm, since the conversion can cause an exponential blowup in the size of the DFA.

### HN Daily

#### Daily Hacker News for 2014-10-23

The 10 highest-rated articles on Hacker News on October 23, 2014 which have not appeared on any previous Hacker News Daily are:

### Planet Theory

#### On the Average-case Complexity of Parameterized Clique

Authors: Nikolaos Fountoulakis, Tobias Friedrich, Danny Hermelin

Download: PDF

Abstract: The k-Clique problem is a fundamental combinatorial problem that plays a prominent role in classical as well as in parameterized complexity theory. It is among the most well-known NP-complete and W[1]-complete problems. Moreover, its average-case complexity analysis has created a long thread of research already since the 1970s. Here, we continue this line of research by studying the dependence of the average-case complexity of the k-Clique problem on the parameter k. To this end, we define two natural parameterized analogues of efficient average-case algorithms. We then show that k-Clique admits both analogues for Erd\H{o}s-R\'{e}nyi random graphs of arbitrary density. We also show that k-Clique is unlikely to admit either of these analogues for some specific computable input distribution.
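The CompsciOverflow question above notes that the number of accepting *paths* in an acyclic NFA is easily computable, even though counting accepted *words* is not. As a minimal sketch (not from the question; the state/transition encoding here is illustrative), the path count is a dynamic program over a topological order:

```python
# Count accepting paths in an acyclic NFA by dynamic programming over
# a topological order (Kahn's algorithm). This counts paths, NOT words.
from collections import defaultdict, deque

def count_accepting_paths(states, start, accepting, transitions):
    """transitions: list of (src, symbol, dst) triples; the transition
    graph must be acyclic."""
    adj = defaultdict(list)
    indeg = {q: 0 for q in states}
    for src, _sym, dst in transitions:
        adj[src].append(dst)
        indeg[dst] += 1

    # paths[q] = number of distinct paths from the start state to q
    paths = {q: 0 for q in states}
    paths[start] = 1
    queue = deque(q for q in states if indeg[q] == 0)
    while queue:
        q = queue.popleft()
        for nxt in adj[q]:
            paths[nxt] += paths[q]  # every path to q extends to nxt
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return sum(paths[q] for q in accepting)

# Two distinct paths both spell the single word "ab", so the path
# count (2) over-counts |L(M)| (1) -- exactly the question's point.
trans = [(0, "a", 1), (0, "a", 2), (1, "b", 3), (2, "b", 3)]
print(count_accepting_paths([0, 1, 2, 3], 0, {3}, trans))  # 2
```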
## October 23, 2014

### CompsciOverflow

#### How many comparisons in the worst case, does it take to merge 3 sorted lists of size n/3?

How many comparisons, in the worst case, does it take to merge 3 sorted lists of size n/3 (where n is a power of 3)? I was told it takes

$$2(n-2) + 1 = 2n-3$$

However, I can't seem to figure out why. The way I was thinking to merge them is just to merge two of the lists, and then merge that big 2/3 list with the remaining list. How come the worst case of that is 2n-3?

The complete explanation I was given was:

The worst case occurs if the first list empties when there is exactly 1 item in each of the other two. Prior to this, each of the other n−2 numbers requires 2 comparisons before going into the big list. After this, we only need 1 more comparison between the 2 leftover items.

Which doesn't make complete sense to me. Not sure if it's just the grammar of the sentences, but I'm not sure where the 2(n-2) came from... What does:

The worst case occurs if the first list empties when there is exactly 1 item in each of the other two.

even mean? When it says "prior to this", it's not clear to me what exactly happened beforehand... What is the "big list" referring to? How did we even get a "big list"?

Btw, I am not looking for an asymptotic answer. I was also interested in the generalization of my question though: if we extend the merge sort algorithm to divide by some constant c instead of 2, why would the recurrence be of the form

$$T(n) = cT \left( \frac{n}{c} \right) + \left[ (c-1)(n-(c-1)) + \sum^{c-2}_{i=1} i\right]$$

The extra term for merging is not entirely clear to me.

### StackOverflow

#### Scala Play 2.3.5 - Coveralls sbt plugin java.io.IOException: Unable to download JavaScript

I am currently trying to set up a Play Scala project build chain with Travis, Heroku, and the coveralls sbt plugin for code coverage. I have created a clean Scala Play app with the activator and just added the coveralls plugin and a travis.yml.
When I push my project and trigger the build, I get the following exception while Travis runs the tests:

```
[error] c.g.h.h.HtmlPage - Error loading JavaScript from [http://localhost:19001/assets/javascripts/hello.js].
java.io.IOException: Unable to download JavaScript from 'http://localhost:19001/assets/javascripts/hello.js' (status 404).
at com.gargoylesoftware.htmlunit.html.HtmlPage.loadJavaScriptFromUrl(HtmlPage.java:1106) ~[htmlunit-2.13.jar:2.13]
at com.gargoylesoftware.htmlunit.html.HtmlPage.loadExternalJavaScriptFile(HtmlPage.java:1039) ~[htmlunit-2.13.jar:2.13]
at com.gargoylesoftware.htmlunit.html.HtmlScript.executeScriptIfNeeded(HtmlScript.java:409) [htmlunit-2.13.jar:2.13]
at com.gargoylesoftware.htmlunit.html.HtmlScript3.execute(HtmlScript.java:266) [htmlunit-2.13.jar:2.13]
at com.gargoylesoftware.htmlunit.html.HtmlScript.onAllChildrenAddedToPage(HtmlScript.java:286) [htmlunit-2.13.jar:2.13]
```

I have found this old topic (https://groups.google.com/forum/#!topic/play-framework/yj4NT3BO0Os) with the same error message, but unfortunately none of the solutions there worked for me. Does anyone here use coveralls or know a solution for my problem? I've attached all configuration files.
build.sbt:

```scala
import scoverage.ScoverageSbtPlugin.instrumentSettings
import org.scoverage.coveralls.CoverallsPlugin.coverallsSettings

name := """buildchain"""

version := "1.0-SNAPSHOT"

scalaVersion := "2.11.1"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  ws
)

instrumentSettings

CoverallsPlugin.coverallsSettings

ScoverageKeys.minimumCoverage := 1

ScoverageKeys.failOnMinimumCoverage := true
```

plugins.sbt:

```scala
resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"

resolvers += Classpaths.sbtPluginReleases

// The Play plugin
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.5")

// web plugins
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.0.0")

// code coverage
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "0.99.7.1")
addSbtPlugin("org.scoverage" %% "sbt-coveralls" % "0.99.0")
```

travis.yml:

```yaml
language: scala
scala:
  - 2.11.2
script: "sbt coveralls"
notifications:
  email: false
```

### Lobsters

#### Anatomy of a code tracer

### StackOverflow

#### Python: Understanding reduce()'s 'initializer' argument

I'm relatively new to Python and am having trouble with folds, or more specifically, reduce()'s 'initializer' argument, e.g. reduce(function, iterable[, initializer]). Here is the function...

```python
>>> def x100y(x, y):
...     return x*100 + y
```

Could someone explain why reduce() produces 44...

```python
>>> reduce(x100y, (), 44)
44
```

or why it produces 30102 here...

```python
>>> reduce(x100y, [1,2], 3)
30102
```

### Planet Clojure

#### Clojure Data Science: Sent Counts and Aggregates

This is Part 3 of a series of blog posts called Clojure Data Science. Check out the previous post if you missed it.
For this post, we want to generate some summaries of our data by doing aggregate queries. We won’t yet be pulling in tools like Apache Storm into the mix, since we can accomplish this through Datomic queries. We will also talk about trade-offs of running aggregate queries on large datasets and devise a way to save our data back to Datomic.

## Updating dependencies

It has been some time since we worked on autodjinn. Libraries move fast in the Clojure ecosystem, and we want to make sure that we’re developing against the most recent versions of each dependency. Before we begin making changes, let’s update everything. If you have already read my [Clojure Code Quality Tools](/blog/2014/09/15/clojure-code-quality-tools/) post, you’ll be familiar with the lein ancient plugin. Below is the output when I run lein ancient on the last post’s finished git tag, v0.1.1. To go back to that state, you can run git checkout v0.1.1 on the autodjinn repo.

It looks like our nomad dependency is out of date. Update the version number in project.clj to 0.7.0 and run lein ancient again to verify that it worked.

If you take a look at project.clj yourself, you may notice that our project is still on Clojure 1.5.1. lein ancient doesn’t look at the version of Clojure that we’re specifying; it assumes you have a good reason for picking the Clojure version you specify. In our case, we’d like to be on the latest stable Clojure, version 1.6.0. Update the version of Clojure in project.clj and then run your REPL. There should be no issues with using the functionality in the app that we created in previous posts. If there are, carefully read the error messages and try to find a solution before moving on.

To save on the hassle of upgrading, I have created a tag for the project after upgrading Clojure and nomad. To go to that tag in your local copy of the repo, run git checkout v0.1.2.
## Datomic query refresher

If you remember back to the first post, we wrapped up by querying for entity IDs and then using Datomic’s built-in entity and touch functions to instantiate each message with all of its attributes. We had to do this because the query itself only returned a set of entity IDs. Note that the Datomic query is made up of several parts:

- The :find clause says what will be returned. In this case, it is the ?eid variable for each record we matched in the rest of the query.
- The :where clause gives a condition to match. In this case, we want all ?eid where the entity has a :mail/uid fact, but we don’t care about the :mail/uid fact’s value, so we give it a wildcard with the underscore (_).

We could pass in the :mail/uid we care about, and only get one message’s entity ID back. Notice how the ?uid variable gets passed in with the :in clause, as the third argument to d/q? Or we could change the query to match on other attributes.

In all these cases, we’d still get the entity IDs back because the :find clause tells Datomic to return ?eid. Typically, we pass around entity IDs and lazy-load any facts (attributes) that we need off that entity. But we could just as easily return other attributes from an entity as part of a query. Let’s ask for the recipients of all the emails in our system.

While it is less common to return only the value of an entity’s attribute, being able to do so will allow us to build more functionality on top of our email abstraction later.

One last thing. Take a look at the return of that query above. Remember that the results returned by a Datomic query are a set. In Clojure, sets are collections of unique values. So we’re seeing the unique list of addresses that are in the To: field in our data. What we’re not seeing is duplicate recipient addresses. To be able to count the number of times an email address received a message, we’ll need a list with non-unique members.
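The set-versus-list distinction here is the crux. As a language-neutral illustration (Python, with made-up addresses, not data from the post), the same pitfall looks like this:

```python
# Hypothetical recipient addresses pulled from three messages; the
# duplicate is exactly the information a sent-count needs to preserve.
recipients = ["alice@example.com", "bob@example.com", "alice@example.com"]

# A set collapses duplicates, just like the unique result set that a
# Datomic query returns:
unique_recipients = set(recipients)

print(len(unique_recipients))  # 2 -- the duplicate recipient is gone
print(len(recipients))         # 3 -- the non-unique list we actually need
```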
Datomic creates a unique set for the values returned by a query. This is generally a great thing, since it gets around some of the issues that one can run into with JOINs in SQL. But in this case, it is not ideal for what we want to accomplish. We could try to get around the uniqueness constraint on the output by returning vectors of the entity ID and the ?to address, and then mapping across the result to pull out the second item.

There’s a simpler way that we can use in the Datomic query itself. By keeping it inside Datomic, we can later combine this approach with more complex queries. We can tell the Datomic query to look at other attributes when considering what the unique key is by passing the query a :with clause. By changing our query slightly to include a :with clause, we end up with the full list of recipients in our datastore.

At this point, it might be a good idea to review Datomic’s querying guide. We’ll be using some of the advanced querying features found in the later sections of that guide, most notably aggregate functions.

## Sent Counts

For this feature, we want to find all the pairs of from-to addresses for each email in our datastore, and then sum up the counts for each pair. We will save all these sent counts into a new entity type in Datomic. This will allow us to ask Datomic questions like who sends you the most email, and who you send the most email to.

We start by building up the query in our REPL. Let’s start with a simpler query, to count how many emails have been sent to each email address in our data store. Note that this isn’t sufficient to answer the question above, since we won’t know who those emails came from; they could have been sent by us or by someone else, or they could have been sent to us. Later, we’ll make it work with from-to pairs that allow us to know things like who is sending email to us.

A simple way to do this would be to wrap our previous query in the frequencies function that clojure.core provides.
frequencies returns a map of items with their counts from a Clojure collection. However, we want to perform the same sort of thing in Datomic itself. To do that, we’re going to need to know about aggregate functions.

Aggregate functions operate over the intermediate results of a Datomic query. Datomic provides functions like max, min, sum, count, rand (for getting a random value out of the query results), and more. With aggregates, we need to be sure to use a :with clause to ensure we aggregate over all our values. Looking at that short list of aggregate functions I’ve named, we can see that we probably want to use the count function to count the occurrence of each email address in a to field in our data.

To see how aggregates work, I’ve come up with a simpler example. (The only new thing to know is that Datomic’s Datalog implementation can query across Clojure collections as easily as it can against a database value, so I’ve given a simple vector-of-vectors here to describe data in the form [database-id person-name]. When the query looks at records in the data, our :where clause gives each position in the vector an id and a name based on position in the vector.)

Let’s review what happened there. Before the count aggregate function was applied, our results looked like this:

[["Jon"] ["Jon"] ["Bob"] ["Chris"]]

So the count function just counts across the values of the variable it is passed (in our case, ?name), and by pairing it with the original ?name value, we get each name and the number of times it appears in our dataset.

It makes sense that we can do the same thing with our recipient email addresses from the previous query. Combining our previous queries with the count aggregate function, we get the same kind of data we were getting with the use of the frequencies function before! So now we know how to use a Datomic aggregate function to count results in our queries. What’s next?
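For comparison (Python rather than Datalog; this sketch is mine, not from the post), the same group-and-count aggregation over that intermediate result can be written with a counter:

```python
from collections import Counter

# The intermediate results before aggregation, as shown above:
# [["Jon"] ["Jon"] ["Bob"] ["Chris"]]
names = ["Jon", "Jon", "Bob", "Chris"]

# Counter plays the role of both clojure.core's frequencies and the
# Datomic count aggregate paired with ?name: each distinct value is
# mapped to the number of times it appears.
counts = Counter(names)
print(counts)  # Counter({'Jon': 2, 'Bob': 1, 'Chris': 1})
```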
Well, what we really want is to get results of the form [from-address to-address] and count those tuples. That way, we can differentiate between email sent to us versus email we’ve sent to others, etc. And eventually, we’d like to save those queries off as functions that we can call to compute the counts from other places in our project.

We can’t pass a tuple like [from-address to-address] to the count aggregate function in one query. The way around this is to write two queries. The inner query will return the tuples, and the outer query will return the tuple and a count of the tuple in the output data. Since the queries run on the peer, we don’t really have to worry about whether it is one query or two, just that it returns the correct data at the end.

So what would the inner query look like? Remember that the outer query will still need a field to pass to the :with clause, so we’ll probably want to pass through the entity ID. Those tuples will be used by our outer query. However, we also need a combined value for the count to operate on. For that, we can throw in a function call in the :where clause and give it a binding at the end for Datomic to use for that new value. In this case, I’ll combine the ?from and ?to values into a PersistentVector that the count aggregate function can use. The combined query nests the inner query inside the outer one, and the output is as we expect.

## Reusable functions

The next step is to turn the query above into various functions we can use to query for from-to counts later. In our data, we don’t just have recipients in the To: field; we also have CC and BCC recipients. Those fields will need their own variations of the query function, but since they will share so much functionality, we will try to compose our functions in such a way that we avoid duplicate code.

In general, when I write query functions for Datomic, I use multiple arities to always allow a database value to be passed to the query function.
This can be useful, for example, when we want to query against previous (historical) values of the database, or when we want to work with a particular database value across multiple queries, to ensure our data is consistent and doesn’t change between queries. By taking advantage of multiple arities, we can default to not having to pass a database value into the function. But in the cases where we do need to ensure a particular database version is used, we can do that. This is a very powerful idiom that I’ve learned since I began to use Datomic, and I suggest you structure all your query functions similarly.

Now, let’s take that function that only queries for :mail/to addresses and make it more generic, with specific wrapper functions for each case where we’d want to use it. Note that we had to change the inner query to take the attr we want to query on as a variable; this is the proper way to pass a piece of data into a query we want to run. The $ that comes first in the :in clause tells Datomic to use the second d/q argument as our dataset (the db value we pass in), and the ?attr tells it to bind the third d/q argument as the variable ?attr.

While the three variations on the functions are similar, we keep the code DRY. (DRY is an acronym for Don’t Repeat Yourself.) In the long run, less code should mean fewer bugs and the ability to fix problems in one place. Building complex systems by composing functions is one of the features of Clojure that I enjoy the most! And notice how we got to these finished query functions by building up functionality in our REPL: another aspect of writing systems in Clojure that I appreciate.

## Querying against large data sets

Right now, our functions calculate the sent counts across all messages every time they’re called.
This is fine for the small sample dataset I’ve been working with locally, but if it were to run against the 35K+ messages that are in my Gmail inbox alone (not to mention all the labels and other places my email lives…) it would take a very long time. With even bigger datasets, we can run into an additional problem: the results may not fit into memory.

When building systems with datasets big enough that they don’t fit into memory, or that may take too much time to compute to be practical, there are two general approaches that we will explore. The first is storing results as data (known as memoizing or caching the results), and the other is breaking up the work to run on distributed systems like Hadoop or Apache Storm.

For this data, we only want to avoid redoing the calculation every time we want to know the sent counts. Currently, the data in our system changes infrequently, and it’s likely that we could tell the system to recompute sent counts only after ingesting new data from Gmail. For these reasons, a reasonable solution will be to store the computed sent counts back into Datomic.

## A new entity type to store our results

For all three query functions we wrote, each result is of the form:

[from-address to-address count]

Let’s add to the Datomic schema in our core.clj file to create a new :sent-count entity type with these three attributes. Note that sent counts don’t really have a unique identifier of their own; it is the combination of from -> to addresses that uniquely identifies them. However, we will leave the from and to addresses as separate fields so it is easy to use them in queries. Add the following maps to the schema-txn vector; you’ll have to call the update-schema function in your REPL to run the schema transaction.

Something that’s worth calling out is that we’re using a Datomic schema valueType that we haven’t seen yet in this project: db.type/ref. In most cases, you’d want to use the ref type to associate with other entities in Datomic.
But we can also use it to associate with a given list of facts. Here, we give the ref type an enum of the possible values that :sent-count/type can have: to, cc, and bcc. By adding this type field to our new entities, we can either choose to look at sent counts for only one type of address, or we can sum up all the counts for a given from-to pair and get the total counts for the system.

Our next job is to add some functions to create the initial sent counts data, as well as to query for it. To keep things clean, I created a sent-counts namespace for these functions to live in. I’ve provided it below with minimal explanation, since it should look very familiar to what we’ve already done: /src/autodjinn/sent_counts.clj

After adding in the sent_counts.clj file, running (sent-counts/create-sent-counts) will populate your datastore with the sent counts computed with the functions we created earlier.

Note: The sent counts don’t have any sort of unique key on them, so if you run create-sent-counts multiple times, you’ll get duplicate results. We’ll handle that another time when we need to update our data.

## Wrapping up

We’ve covered a lot of material on querying Datomic. In particular, we used aggregate functions to get the counts and sums of records in our data store. Because we don’t want to run the queries all the time, we created a new entity type to store our sent counts and saved our data into it. With query functions like those found in the sent-counts namespace, we can start to ask our data questions like “In the dataset, what address was sent the most email?”

If you want to compare what you’ve done with my version, you can run git diff v0.1.3 on the autodjinn repo.

Please let me know what you think of these posts by sending me an email at contact@mattgauger.com. I’d love to hear from you!
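The compute-once-and-store strategy the post settles on (calculate the aggregate, keep the result, recompute only after a new ingest) can be reduced to a short language-neutral sketch. This is Python for illustration only; every name here is hypothetical and not part of the autodjinn codebase:

```python
# Sketch of "store the computed aggregate": compute sent counts once,
# keep the result, and invalidate after ingesting new mail.
_sent_counts_cache = None

def compute_sent_counts(messages):
    """The expensive aggregate: (from, to) -> number of messages."""
    counts = {}
    for msg in messages:
        key = (msg["from"], msg["to"])
        counts[key] = counts.get(key, 0) + 1
    return counts

def sent_counts(messages):
    """Return cached counts, computing them on first use."""
    global _sent_counts_cache
    if _sent_counts_cache is None:
        _sent_counts_cache = compute_sent_counts(messages)
    return _sent_counts_cache

def invalidate_sent_counts():
    """Call after ingesting new mail so counts are recomputed."""
    global _sent_counts_cache
    _sent_counts_cache = None
```

In the post, the "cache" is Datomic itself (the :sent-count entities), but the trade-off is the same: fast reads in exchange for explicitly recomputing when the underlying data changes.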
### /r/clojure

#### CS Education Zoo #5 - David Nolen

### Portland Pattern Repository

#### Submarine Patent (by 50-76-88-185-static.hfc.comcastbusiness.net 24 hours ago)

### Planet Clojure

#### Some syntactic sugar for Clojure's threading macros

TL;DR: threading all the things

# Introduction

The other day I was writing a Clojure let block to transform a map. It was a pretty usual Clojure pipeline of functions, a use case Clojure excels at. The pipeline included a cond, a couple of maps, some of my own functions, and finally an assoc and dissoc to “update” the input map with the result of the pipeline and delete some redundant keys. Even though Clojure syntax is quite spare, there was quite a bit of inevitable clutter in the code, and it struck me that the code would be cleaner and clearer if I could use the thread first (->) macro.

If you grok macros you can probably guess the rest of this post (likely you will have seen the title and probably said to yourself “Oh yeah, that’s obvious, nothing to see here” and moved along ☺).

# Threading Macros

## Threading Macros - “thread first” and “thread last”

Clojure core has a family of threading macros including the “thread first” ones of ->, some->, as->, and cond->, and their equivalent “thread last” (->>) ones. I’m not going to explain the threading macros in depth as these have been well covered already - see for example this very nice post by Debasish Ghosh (btw Debasish’s book DSLs in Action is worth your money).

Simply put: the threading macros allow a pipeline of functions to be written in a visually clean and clear way — pre-empting the need to write a perhaps deeply nested inside-to-outside functional form — by weaving the result of the previous function into the current function as either the first (“thread first”) or last (“thread last”) argument.

The example below of using “thread last” to sum the balances of a bank’s savings accounts has been taken from Debasish’s post.
I have reworked his example slightly and added some narrative comments to make it completely clear what is going on:

```clojure
;; Debasish's example slightly reworked
;; For original see http://debasishg.blogspot.co.uk/2010/04/thrush-in-clojure.html

(def all-accounts [{:no 101 :name "debasish" :type 'savings :balance 100}
                   {:no 102 :name "john p." :type 'checking :balance 200}
                   {:no 103 :name "me" :type 'checking :balance -500}
                   {:no 104 :name "you" :type 'savings :balance 750}])

(def savings-accounts-balance-sum-using-thread-last
  ;; use the thread-last macro
  (->>
   ;; ... and start from the collection of all accounts
   all-accounts
   ;; ... select only the savings accounts
   (filter #(= (:type %) 'savings))
   ;; ... get the balances from all the saving accounts
   (map :balance)
   ;; ... and add up all their balances
   (apply +)))

(doall (println "savings-accounts-balance-sum-using-thread-last"
                savings-accounts-balance-sum-using-thread-last))

;; check the answer
(assert (= 850 savings-accounts-balance-sum-using-thread-last))
```

## Threading Macros - what’s not to like?

Nothing really, although there are limitations of course. For example, threading a map needs “thread last” while assoc requires “thread first”, but you can’t mix first and last together directly. Although “thread first” and “thread last” cover a wide range of use cases, there are times where you have to go through hoops to incorporate code that requires the current value of the pipeline as other than the first or last argument, or maybe need to use the value multiple times in multiple subforms.

## Threading Macros - using a partial

There are ways around the limitations of course, and one way is to use partial with “thread first” to supply the argument as the last argument to the function.
This horrid example using “thread first” with lots of partials is bonkers but does demonstrate the point:  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24  ;; Using partials with thread-first (def savings-accounts-balance-sum-using-thread-first ( ;; use the thread-first macro -> ;; ... and start from the collection of all accounts all-accounts ;; ... select only the savings accounts ((partial filter #(= (:type %) 'savings))) ;; ... get the balances from all the saving accounts ((partial map :balance)) ;; ... and add up all their balances ((partial apply +)))) (doall (println "savings-accounts-balance-sum-using-thread-first" savings-accounts-balance-sum-using-thread-first)) ;; check the answer (assert (= 850 savings-accounts-balance-sum-using-thread-first))  Note each partial call is the first (and only) form inside another form; the “thread first” macro will weave the input to the partial as the second value in the outer form. (Else the macro would weave the previous result into the partial declaration itself.) ## Threading Macros - using an in-line function More generally, you can always escape the confines of the first or last constraint by using an in-line function. The following example sums the balances of all the checking accounts in deficit, applying 10% interest, to find the total owed to the bank. It uses an in-line function to apply the interest.  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41  ;; Calculate the total balance of all the checking accounts in deficit ;; applies interest to any in deficit (def deficit-accounts-balance-sum-using-interest-function ( ;; use the thread-last macro ->> ;; ... and start from the collection of all accounts all-accounts ;; ... select only the checking accounts (filter #(= (:type %) 'checking)) ;; ... 
select the accounts in deficit (filter #(> 0 (:balance %))) ;; add 10% interest to any in deficit ;; interest rate is first argument; second (last) is the deficit accounts ((fn [interest-rate deficit-accounts] (map (fn [deficit-account] (let [balance (:balance deficit-account) interest (* interest-rate balance)] (assoc deficit-account :balance (+ balance interest)))) deficit-accounts)) ;; interest rate is 10% 0.1) ;; ... get the balances from all the deficit accounts (map :balance) ;; ... and add up all their balances to get net balance (apply +))) (doall (println "deficit-accounts-balance-sum-using-interest-function" deficit-accounts-balance-sum-using-interest-function)) ;; check the answer (assert (= -550.0 deficit-accounts-balance-sum-using-interest-function))  Note the in-line function declaration is the first form inside another form (for the same reason as the partials above were). ## Threading Macros - capturing the result of the previous step In the above examples the steps were calls to core functions: filter, map and apply. But the step can be a call to a macro and the macro will be passed the current value of the form being evaluated by the threading macro. In the example below, a simple macro show-the-argument will print the current evaluated form and return it to continue the evaluation.  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39  ;; using a simple macro to show what's passed to each step in the pipeline (defmacro show-the-argument [argument] (doall (println argument)) (do ~argument)) (def savings-accounts-balance-sum-and-show-the-argument ( ;; use the thread-last macro ->> ;; ... and start from the collection of all accounts all-accounts ;; show the argument (show-the-argument) ;; ... select only the savings accounts (filter #(= (:type %) 'savings)) ;; show the argument (show-the-argument) ;; ... 
get the balances from all the saving accounts (map :balance) ;; show the argument (show-the-argument) ;; ... and add up all their balances (apply +))) (doall (println "savings-accounts-balance-sum-and-show-the-argument" savings-accounts-balance-sum-and-show-the-argument)) ;; check the answer (assert (= 850 savings-accounts-balance-sum-and-show-the-argument))  If you look at the prints, you’ll see something like the below for the output for the post filter call to show-the-argument (I’ve reformatted to aid clarity):  1 2 3  (filter (fn* [p1__1419#] (= (:type p1__1419#) (quote savings))) (show-the-argument all-accounts))  You can see how “thread last” has woven the previous call to show-the-argument (after all-accounts) into the filter form. The calls to show-the-argument will be evaluated after “thread last” has presented its evaluated form for compilation. ## Threading Macros - “thread-first-let” show-the-argument demonstrates how simple it is in a macro to grab hold of the current form being evaluated and do something with it. Let’s do that then. The macro “thread-first-let” below takes the argument together with some body forms, and returns a new form that assigns the argument to a let gensym local called x# and evaluates the body forms in the context of the let so the body forms can use x# anywhere needed. The macro includes a println to shows the form returned by “thread-first-let” to the compiler:  1 2 3 4 5 6 7 8 9 10  ;; Using the "thread-first-let" macro create a let and assigns the current form to x# ;; and evaluates the body in the let context, with x# available in the body (defmacro thread-first-let [argument & body] (let [let-form# (let [~'x# ~argument] (~@body))] (doall (println let-form#)) (do ~let-form#)))  Let’s reprise to the horrid example above where I used partials with “thread first” and use “there-first-let” instead.  
```clojure
;; Using "thread-first-let" inside a "thread-first"
(def savings-accounts-balance-sum-using-thread-first-let
  (;; use the thread-first macro
   ->
   ;; ... and start from the collection of all accounts
   all-accounts
   ;; ... select only the savings accounts
   (thread-first-let filter #(= (:type %) 'savings) x#)
   ;; ... get the balances from all the saving accounts
   (thread-first-let map :balance x#)
   ;; ... and add up all their balances
   (thread-first-let apply + x#)))

(doall (println "savings-accounts-balance-sum-using-thread-first-let"
                savings-accounts-balance-sum-using-thread-first-let))

;; check the answer
(assert (= 850 savings-accounts-balance-sum-using-thread-first-let))
```

One of the prints from “thread-first-let” shows the final form of the filter:

```clojure
(clojure.core/let [x# all-accounts]
  (filter (fn* [p1__1431#] (= (:type p1__1431#) (quote savings))) x#))
```

The takeaway here is that after the thread-first-let the code is exactly the same as you would write outside of a core threading macro, and you have the freedom to use the argument (x#) wherever and whenever you need it, not just once and/or “last”.

# Final Words

“thread-first-let” is a very simple seven-line macro that allows arbitrary code to participate in the core threading macros, keeping the whole pipeline as clean and clear as possible. It is not hard to see how this simple idea could be taken forward to define macros to support “thread-last”, or even “packaged” macros such as thread-first-map.

The more general point is how even a trivial use of macros makes for a welcome improvement in keeping the code clean and clear, and really does bring home how tractable and malleable macros make Clojure.

### /r/compsci

#### Question: Do applications like Permanent Eraser help avoid hard drive fragmentation?

Personally, that would be the only reason I would have the patience to use it. Thank you. Also, sorry if this is the wrong place to ask.
Just really would appreciate an answer from someone who knows what they're talking about.

submitted by _apprentice_ [link] [1 comment]

### TheoryOverflow

#### Bias of a random boolean low degree polynomial

What is the bias of a random Boolean function that can be represented as a low degree polynomial over the reals, i.e. has low Fourier degree?

More specifically, is it true that if we take a uniformly random function $f\colon \{0,1\}^n \to \{0,1\}$ among those that can be represented as a real polynomial of degree $\leq d$, then $\mathbb{E}[f]$ will be close to $0.5$ with high probability?

Remark 1: Alternatively, it also makes sense to consider the following distribution: when choosing a random function of degree $d$, identify functions that are equivalent up to renaming coordinates, so a random function is in fact a random equivalence class.

Remark 2: This question is somewhat related: Random functions of low degree as a real polynomial.

### StackOverflow

#### Transform a Collection of scalaz disjunctions into a single disjunction

Given the following method:

```scala
def foo(seq: Seq[Long]): Seq[\/[String, Long]] =
  seq map { v =>
    for {
      bar <- returnsOptionLong1(v)   \/> "first was None"
      baz <- returnsOptionLong2(bar) \/> "second was None"
    } yield baz
  }
```

I want to implement the following method:

```scala
def qux(initial: Seq[\/[String, Long]]): \/[String, Seq[Long]] = {
  // ... Fill-in implementation here ...
}
```

In other words: how does one use scalaz to transform a sequence of disjunctions into a disjunction whose right side is a sequence?

Note: If a cleaner implementation would involve making changes to foo as well (e.g. modifications involving changing map to flatMap), please include those as well.

### QuantOverflow

#### Success of trendlines using dividend-adjusted vs un-adjusted data

I'm curious whether anybody has any experience with using trend lines drawn using data which is vs isn't adjusted for dividends.
For periods of sideways trading that give roughly horizontal trendlines, the adjusted data would obviously make more sense, assuming the adjustment is made during one of these periods. For applications of machine learning and most indicators I can think of, adjusted prices should be used to remove the systematic moves. But I'm uncertain about using adjusted prices for trend lines, particularly sloping ones.

Philosophically, it depends on what drives sloping trend lines. If they are driven by the psychology of human traders, then I believe it is anchoring of the psyche on different raw numbers, in which case raw un-adjusted data should be used. But if the trending equity is doing so based on bottom lines calculated by machine (or human), then I would think adjusted data should be used.

My qualitative experience is that I previously used un-adjusted data, but now use adjusted data, and I think there was more "coincidence" between price movements and the support/resistance levels determined by trend lines drawn on un-adjusted data. Does anybody have any specific experience on this to offer?

### /r/compsci

#### How to present a computer algebra system (CAS)?

I have to make a presentation on a computer algebra system as part of my college course on symbolic computation. Could you advise me on the outline I should follow? I am thinking about the Wolfram Alpha online site.

submitted by cudoer [link] [1 comment]

### StackOverflow

#### using core.async with blocking clients/drivers: are there performance benefits?

I'm programming a web application backend in Clojure, using among other things:

• http-kit as an HTTP server and client (non-blocking)
• monger as my DB driver (blocking)
• clj-aws-s3 as an S3 client (blocking)

I am aware of the performance benefits of event-driven, non-blocking stacks like the ones you find on NodeJS and the Play Framework (this question helped me), and how they yield a much better load capacity.
For that reason, I'm considering making my backend asynchronous using core.async. My question is: can you recreate the performance benefits of non-blocking web stacks by using core.async on top of blocking client/driver libraries?

Elaborating: what I'm currently doing are the usual synchronous calls:

```clojure
(defn handle-my-request [req]
  (let [data1  (db/findData1)
        data2  (db/findData2)
        data3  (s3/findData3)
        result (make-something-of data1 data2 data3)]
    (ring.util.response/response result)))
```

What I plan to do is wrap any call involving IO in a thread block, and synchronize these inside a go block:

```clojure
(defn handle-my-request! [req resp-chan]
  ;; resp-chan is a core.async channel through which the response must be pushed
  (go
    (let [data1-ch (thread (db/findData1)) ;; spin off threads to fetch the data (involves IO)
          data2-ch (thread (db/findData2))
          data3-ch (thread (s3/findData3))
          ;; synchronize
          result   (make-something-of (<! data1-ch) (<! data2-ch) (<! data3-ch))]
      ;; send the response
      (->> (ring.util.response/response result)
           (>! resp-chan)))))
```

Is there a point in doing it that way? I'm doing this because it's kind of the best practice I found, but its performance benefits are still a mystery to me. I thought the issue with synchronous stacks was that they use one thread per request. Now it seems they use more than one.

Thanks in advance for your help, have a beautiful day.

#### configure run in eclipse for Scala

I am a beginner in Scala. I installed the Scala IDE in Eclipse and now I want to run my application. It never shows "run as Scala application"; instead it shows "run as Java application" or "Java applet". I opened "run configuration" and clicked on "Scala application"; my project name is "test" and the second column is "Class Main". What do I have to fill in? I filled in "Main.Scala", but it states "could not find main class main.scala". Can you help me with running this project?

#### Play 2 stop/start causes compiler to enter infinite loop?
I'm currently trying to test a couple of things with Play! and fix some issues with the ClassLoader conflicting with a plugin. But I discovered something interesting. If I modify the beforeStart method in Global...

```scala
/**
 * override pre start behavior
 */
override def beforeStart(app: Application): Unit = {
  Play.stop()
  Play.start(app)
}
```

...then the compiler enters an infinite loop and never completes. In the logs all I see is this, even after hitting the host with HTTP:

```
--- (Running the application from SBT, auto-reloading is enabled) ---
[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
(Server started, use Ctrl+D to stop and go back to the console...)
```

Is this happening because I'm calling Play.start and it always calls beforeStart again? Is this a bug?

#### Making Generics in Shapeless 2.0.0

I'd like to do the following with Shapeless 2.0.0:

```scala
def freeze[T](o: T) = {
  val gen = Generic[T]
  gen.to(o)
}
```

This gives me an error saying that T is not a case class or trait. Is there any way to do this? (I can do val gen = Generic[Foo]. That's fine, but what if I need to be able to build a Generic from something not known at compile-time?)

### QuantOverflow

#### What is the formula for beta weighted delta and gamma?

I am trying to calculate the beta weighted delta and gamma for a portfolio of options on different underlying stocks, but I can't seem to find the correct formula. Can someone point me to it or to a book that contains it?

### CompsciOverflow

#### High Level Explanation of the Pumping Lemma

I have a problem that I cannot figure out regarding using the pumping lemma to prove that a language is not regular. I don't understand how to go about proving through contradiction that a language is not regular. When I read about the pumping lemma, all I find are complicated explanations involving x, y, z and m which seem difficult to understand.
I would appreciate it if someone could provide a high level overview of the pumping lemma, and possibly an example of proving a language is not regular with its use.

### StackOverflow

#### Ansible MySQL User with REQUIRE SSL

I've just begun learning Ansible today, and I'm already making fast progress and on the edge of being able to automate our whole IT stack. That's nice! :)

I've however hit a roadblock. We've chosen to take the small performance hit and encrypt ALL MySQL connections using the SSL feature. This is to let our office IPs manage it remotely, and also inter-datacenter.

Using the mysql_user module, I can make sure a user is added, and set the password and so forth. But I can't seem to find any way to require SSL on the user. According to a quick Google, and the lack of options in the documentation, I guess I can't do it with mysql_user.

But the real question is: do you know a (preferably clean) workaround? If I could somehow execute raw queries with Ansible it would be perfect. To be specific, I need to replicate this SQL in Ansible, however possible:

```sql
GRANT ALL PRIVILEGES ON *.* TO 'ssluser'@'%' IDENTIFIED BY 'pass' REQUIRE SSL;
```

#### Azure EventHubs consumed by Scala client issues

We are trying to consume Microsoft Azure EventHubs messages from a Scala client.
We have based our spike on this sample: http://azure.microsoft.com/en-us/documentation/articles/service-bus-java-how-to-use-jms-api-amqp/

So far we have been able to connect and receive the messages for a partition inside a consumer group with the following path:

```
<eventHubName>/ConsumerGroups/<consumerGroupName>/Partitions/<numberOfPartition>
```

The main code is this:

```scala
import javax.jms._
import javax.naming.Context
import javax.naming.InitialContext
import java.io.BufferedReader
import java.io.InputStreamReader
import java.util.Hashtable

object Program {

  def main(args: Array[String]): Unit = {
    try {
      val simpleSenderReceiver = new SimpleSenderReceiver()
      System.out.println("Press [enter] to send a message. Type 'exit' + [enter] to quit.")
      val commandLine = new BufferedReader(new InputStreamReader(System.in))
      while (true) {
        val s = commandLine.readLine()
        if (s.equalsIgnoreCase("exit")) {
          simpleSenderReceiver.close()
          System.exit(0)
        }
      }
    } catch {
      case e: Exception => e.printStackTrace()
    }
  }

  class SimpleSenderReceiver extends MessageListener {
    var connection: TopicConnection = null
    var receiveSession: TopicSession = null
    var receiver: TopicSubscriber = null

    try {
      // Configure JNDI environment
      val env = new Hashtable[String, String]()
      env.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.qpid.amqp_1_0.jms.jndi.PropertiesFileInitialContextFactory")
      env.put(Context.PROVIDER_URL, "servicebus.properties")
      val context: Context = new InitialContext(env)

      // Lookup ConnectionFactory and Queue
      val cf = context.lookup("SBCF").asInstanceOf[TopicConnectionFactory]
      val topic = context.lookup("default").asInstanceOf[Topic]

      // Create Connection
      connection = cf.createTopicConnection()

      // Create receiver-side Session, MessageConsumer, and MessageListener
      receiveSession = connection.createTopicSession(false, Session.CLIENT_ACKNOWLEDGE)
      receiver = receiveSession.createSubscriber(topic)
      receiver.setMessageListener(this)
      connection.start()
    } catch {
      case jms: JMSException => println(jms.getErrorCode + " " + jms.printStackTrace())
      case e: Exception      => println(e.getMessage)
    }

    def close(): Unit = {
      connection.close()
    }

    def onMessage(message: Message): Unit = {
      try {
        val bytesMessage = message.asInstanceOf[BytesMessage]
        val bodyArray = new Array[Byte](bytesMessage.getBodyLength.asInstanceOf[Int])
        bytesMessage.readBytes(bodyArray)
        val text = new String(bodyArray, "UTF-8")
        System.out.println("Received message = " + text)
        message.acknowledge()
      } catch {
        case e: Exception => e.printStackTrace()
      }
    }
  }
}
```

We have two questions.

A) Every time we run this code we receive the entire batch of messages stored in the partition. How can we define from which offset to start receiving messages? And how can we know the offset of the last message received?

B) If we leave this code listening for messages after the first batch, no new messages are received, and after 10 minutes we get this exception:

```
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(Unknown Source)
    at java.net.SocketInputStream.read(Unknown Source)
    at sun.security.ssl.InputRecord.readFully(Unknown Source)
    at sun.security.ssl.InputRecord.read(Unknown Source)
    at sun.security.ssl.SSLSocketImpl.readRecord(Unknown Source)
    at sun.security.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
    at sun.security.ssl.AppInputStream.read(Unknown Source)
    at java.io.InputStream.read(Unknown Source)
    at org.apache.qpid.amqp_1_0.client.Connection.doRead(Connection.java:370)
    at org.apache.qpid.amqp_1_0.client.Connection.access$000(Connection.java:42)
    at org.apache.qpid.amqp_1_0.client.Connection$2.run(Connection.java:244)
    at java.lang.Thread.run(Unknown Source)
```

How can we make the client wait for new messages indefinitely?

Thanks in advance. Fer.
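On question B, one common pattern for surviving idle timeouts (a hedged sketch only, not the Event Hubs or qpid API; `drain` and the fake `Supplier` below are made-up stand-ins for a real blocking `consumer.receive(timeout)`) is to replace the push-based MessageListener with a synchronous receive loop that treats a timeout as a normal event and simply polls again:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class PollLoop {

    // Synchronous receive loop: each null from `receive` models a
    // consumer.receive(timeout) call that expired. A timeout is treated as a
    // normal event (poll again), not a fatal error, so the client effectively
    // waits for new messages indefinitely. Here we stop after `maxEmptyPolls`
    // consecutive timeouts only so that the sketch terminates.
    public static List<String> drain(Supplier<String> receive, int maxEmptyPolls) {
        List<String> messages = new ArrayList<>();
        int emptyPolls = 0;
        while (emptyPolls < maxEmptyPolls) {
            String msg = receive.get(); // stand-in for consumer.receive(timeoutMs)
            if (msg == null) {
                emptyPolls++;           // timeout expired: just poll again
            } else {
                emptyPolls = 0;
                messages.add(msg);
            }
        }
        return messages;
    }

    public static void main(String[] args) {
        // Fake message source: three messages interleaved with one timeout.
        String[] feed = {"a", null, "b", "c"};
        int[] i = {0};
        Supplier<String> fake = () -> i[0] < feed.length ? feed[i[0]++] : null;
        System.out.println(drain(fake, 3)); // prints [a, b, c]
    }
}
```

In a real consumer the loop would also re-open the connection when the underlying socket has dropped, and the exit condition would typically be a shutdown flag rather than a count of empty polls.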
### DataTau

#### BID Data Project

### StackOverflow

#### Array of values in FluentD

How do I create an array of values in the event record with Fluentd? I have parsed latitude and longitude from the log. How do I transform these values into an array?

E.g. I have a log like:

```
2014-9-23T09:27:28.345 1411464370345 -37.0081,174.792 BBC SEARCH be03debe-b0af-4939-9abc-7c0ad25bb114 DEPARTURE 16 576.00 ROLLBACK
```

I have parsed latitude=-37.0081 and longitude=174.792. How do I form a JSON object like this?

```json
{"location": [-37.0081, 174.792]}
```

And how do I parse the string values to data types in the event record, like integer/float/double?

### /r/netsec

#### Live stream of Peter "Mudge" Zatko, member of the high profile hacker think tank the L0pht, discussing his time as a program manager at DARPA. Scheduled for 6-8pm ET.

### StackOverflow

#### How can scala-js integrate with sbt-web?

I would like to use scala-js with sbt-web in such a way that it can be compiled to produce JavaScript assets that are added to the asset pipeline (e.g. gzip, digest). I am aware of lihaoyi's workbench project but I do not believe this affects the asset pipeline. How can these two projects be integrated as an sbt-web plugin?

#### Setting a third-party plugin setting in sbt AutoPlugin

I have an AutoPlugin which aggregates several third-party plugins and customizes their settings for our company.
For most of the plugins, this works just fine by putting them in the projectSettings:

```scala
override lazy val projectSettings = Seq(
  somePluginSetting := "whatever"
)
```

I tried to do this for Scalastyle as well:

```scala
import org.scalastyle.sbt.ScalastylePlugin.scalastyleConfigUrl

override lazy val projectSettings = Seq(
  scalastyleConfigUrl := Some(url("http://git.repo/scalastyle-config.xml"))
)
```

This setting is never visible in projects using my plugin; instead sbt uses the plugin-provided default value:

```
> inspect scalastyleConfigUrl
[info] Setting: scala.Option[java.net.URL] = None
[info] Description:
[info]   Scalastyle configuration file as a URL
[info] Provided by:
[info]   {file:/Users/kaeser/Documents/workspace/ci-test-project/}root/*:scalastyleConfigUrl
[info] Defined at:
[info]   (org.scalastyle.sbt.ScalastylePlugin) Plugin.scala:101
[info] Delegates:
[info]   *:scalastyleConfigUrl
[info]   {.}/*:scalastyleConfigUrl
[info]   */*:scalastyleConfigUrl
[info] Related:
[info]   test:scalastyleConfigUrl
```

When I put the setting into build.sbt directly, it works as expected. I made a simple example sbt plugin that shows the problem: https://github.com/jastice/sbt-customsettings

What might the issue be?

#### ScalaZ: what is "type Tagged[T] = {type Tag = T}"?

I started to read scalaz's source code.

```scala
package object scalaz {
  import Id._

  implicit val idInstance: Traverse1[Id] with Each[Id] with Monad[Id] with Comonad[Id]
    with Distributive[Id] with Zip[Id] with Unzip[Id] with Align[Id] with Cozip[Id] = Id.id

  type Tagged[T] = {type Tag = T}
  // ...
}
```

How should I interpret type Tagged[T] = {type Tag = T}? What does this mean? Where is this Scala syntax described? I am totally confused by this. What does it do? Why? Could someone give a super simple example that would explain this syntax for "dummies"... and what it is good for?

### TheoryOverflow

#### Why $Words(\square \varphi \rightarrow \lozenge \psi) \subseteq Words(\varphi \bigcup (\psi \vee \neg\varphi))$?
For an exercise we have to show $\square \varphi \rightarrow \lozenge \psi \equiv \varphi \bigcup (\psi \vee \neg\varphi)$ using $\varphi \equiv \psi \iff Words(\varphi) = Words(\psi)$.

I understand the proof for $Words(\square \varphi \rightarrow \lozenge \psi) \subseteq Words(\varphi \bigcup (\psi \vee \neg\varphi))$, but not the other way around, as it is explained in the solution:

> $Words(\varphi \bigcup (\psi \vee \neg\varphi)) \subseteq Words(\square \varphi \rightarrow \lozenge \psi)$: Let $\sigma \in Words(\varphi \bigcup (\psi \vee \neg\varphi))$. To show $\sigma \in Words(\square \varphi \rightarrow \lozenge \psi)$, we assume that $\sigma \models \square\varphi$ and prove that $\sigma \models \lozenge\psi$. Since by assumption $\sigma \models \varphi \bigcup (\psi \vee \neg\varphi)$, *at some point $\psi \vee \neg\varphi$ must hold*. Because of $\sigma \models \square\varphi$, this can only be the case if eventually $\psi$ holds. Hence, $\sigma \models \lozenge \psi$.

Emphasis mine.

The part *at some point $\psi \vee \neg\varphi$ must hold* confuses me a bit. Why is this the case? We assume $\varphi$ always holds, so the until ($\bigcup$) operator is satisfied no matter what, right? $\square \varphi \rightarrow \varphi \bigcup false$?

### /r/freebsd

#### [HELP] Portsnap Problem

Thanks for the help everyone. I guess it's just not feasible to run an up to date *nix on these systems.

I decided to bust out my old Sawtooth Power Mac G4 to install a Mumble and chat server on. I have FreeBSD 10.0 installed, but I am stuck there. Pkg for whatever reason didn't come with the base system (a REALLY REALLY BAD oversight if you ask me) and I need to use ports to install it. Problem is I can't install the ports tree. "portsnap fetch" downloads the ports snapshot, extracts it, then sits endlessly on "verifying snapshot integrity" for over 24 hours.
I tried suspending the command and just trying to use "portsnap extract" and "portsnap update", but both complain about the ports not existing and suggest that I try "portsnap fetch", which never completes. How do I either disable the snapshot integrity verification, or install pkg without using ports?

EDIT: I thought about trying to use screen or tmux so I could open top and see what's happening while the command is executing, but neither are in the base system.

EDIT2: Over 30 hours later it finished. This is an unacceptable speed.

EDIT3: Why am I being downvoted for asking for some fucking help?

submitted by [deleted] [link] [28 comments]

### /r/scala

#### Looking for some initial feedback on Spray project

Hi! I just started a Spray project after having worked on Play! for a little while. Spray is pretty amazing, but I think I'm having a tough time figuring out the proper way to use the Actor model (aside from just the Http Service routers). I'd love it if someone could peek around my initial project: https://github.com/fzakaria/addressme and offer some free advice :) I'm still figuring out a good way to incorporate a DAO layer using Slick.

submitted by Setheron [link] [12 comments]

### StackOverflow

#### Is there any Scala feature that allows you to call a method whose name is stored in a string?

Assuming you have a string containing the name of a method, an object that supports that method and some arguments, is there some language feature that allows you to call that dynamically? Kind of like Ruby's send method.

#### Scala IDE Template Editor Broke

I just downloaded the Scala IDE 4.0 release candidate 1 on my Windows machine. I set up a basic Play Scala project and tried opening the index.scala.html file with the New Play Editor, and the file doesn't open. It looks like this:

So then I tried opening the file in the regular Play editor, and when I type, the characters are typed in reverse:

Anybody know how to go about fixing this?
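The earlier reflection question ("call a method whose name is stored in a string") has no dedicated Scala language feature, but since Scala runs on the JVM, plain java.lang.reflect works on Scala objects too (scala.reflect offers richer, typed alternatives). A minimal hedged sketch using Java reflection; `Greeter` and `send` are made-up illustration names, not an existing API:

```java
import java.lang.reflect.Method;

public class DynamicCall {

    public static class Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Look up `methodName` on the receiver's class and invoke it with a single
    // String argument; roughly analogous to Ruby's obj.send(:name, arg).
    public static Object send(Object receiver, String methodName, String arg) {
        try {
            Method m = receiver.getClass().getMethod(methodName, String.class);
            return m.invoke(receiver, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(send(new Greeter(), "greet", "world")); // prints Hello, world
    }
}
```

Note that getMethod resolves by name plus parameter types, so overloaded methods need the exact signature, and exceptions thrown by the target surface wrapped in InvocationTargetException.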
#### How to eval a defrecord generated by defmacro

That's a really weird case! I've arrived at a defrecord definition, but when I call my macro I only get the code, without the evaluation.

```clojure
(defmacro protocol-impl [protocol-definition]
  `(defrecord ~(symbol "my-wrapper") [~(symbol "e#")]
     ~@(let [[type# protocol-functions#] ~protocol-definition]
         (conj (map (fn [[function-name# function-args#]]
                      `(~function-name# ~function-args#
                         (~function-name# ~(symbol "e#") ~@(next function-args#))))
                    protocol-functions#)
               type#))))

(protocol-impl (adapt-super-impls (first (get-methods (Example.)))))
;;=> (clojure.core/defrecord
;;    my-wrapper
;;    [e#]
;;    wrapper.core.Welcome
;;    (say_bye [this# a# b#] (say_bye e# a# b#))
;;    (greetings [this#] (greetings e#)))
```

If I try:

```clojure
(my-wrapper. (Example.))
;;=> Unable to resolve classname: my-wrapper
```

But if I eval the output generated by my macro call in nREPL, the defrecord is evaluated fine.

Any ideas to get this macro working, or how could I work with my current macro output? Here is the gist with 70 lines of code.

Thanks in advance,
Juan

PS: I know that this double ` caused this behaviour, but to define a defrecord you need all protocols and fns following in the same list definition, and I didn't find a better way to achieve it.

### Lobsters

#### Hex: a package manager for the Erlang ecosystem.

#### Onyx: Distributed, fault tolerant data processing for Clojure

#### Roshi: large-scale CRDT set implementation for timestamped events.

#### Copycat: Protocol-agnostic implementation of the Raft consensus algorithm

### QuantOverflow

#### Grad-level courses to take to prepare for quant roles? [on hold]

I'm an MS student in computational science and want to work in big data/statistics, quantitative finance/HFT, or scientific programming/numerical modeling afterwards. I am currently using Unix, Linux and C++ for a research project for my master's thesis. I also know the basics of Matlab and R.
I have taken undergrad courses in numerical analysis and probability/stats and grad-level coursework in numerical linear algebra. I just need to take one grad course next semester and then I'll graduate. Should I take more anyway?

The courses I am thinking about taking next semester are below. I'm thinking the Parallel Algorithms, Regression, Statistical Methods or Bayesian Statistical Methods courses may be best. Since I'm more interested in stats than programming, but haven't taken any grad-level stats courses, I'm thinking it may be best to take a stats course:

1. Design and Analysis of Experiments in Stats - layouts, variance, factorial experiments, block designs, classifications, fixed, random, and mixed models.
2. Parallel Algorithms - high performance and parallel computation. I'd say this is most relevant to my current research project.
3. Bayesian Statistical Methods - Markov chain Monte Carlo, Bayes models.
4. Applied Regression - linear/logistic regression, residuals, data analysis.
5. Numerical Analysis of DEs - covers numerically solving ODEs and PDEs. I'm thinking this won't be that helpful since my current numerical analysis course covers this, but very lightly.
6. Statistical Methods II - random and mixed effects models, time series, survival analysis, Bayesian methods, and MANOVA. Emphasis on using statistical software.

### CompsciOverflow

#### Optimizing Permutation Algorithm

I'm looking at a permutations algorithm whose complexity is pretty high, O(N*N!). The algorithm is based on these observations:

```
GeneratePermutations("ABC") = [ABC; ACB; BAC; BCA; CBA; CAB]

          ABC
        /  |  \
     ABC  BAC  CBA
     / \   / \   / \
  ABC ACB BAC BCA CBA CAB
```

```python
def permutations(head, tail=''):
    if len(head) == 0:
        print tail
    else:
        for i in range(len(head)):
            permutations(head[0:i] + head[i+1:], tail + head[i])
```

The code is shared in this thread. From the tree, I noticed that there is redundancy, so I tried to add some techniques to memoize the results in order to reduce the cost.
I am trying to find a less costly algorithm. I divided the work into two steps:

• Get all the possibilities encoded in numbers. In our example "ABC", the length is 3, so I should find all the permutations of [1,2,3], where max here is 111*max(myList).

```python
def hasAllNumbers(x):
    d = {"1": False, "2": False, "3": False}
    for item in list(str(x)):
        if item in d:
            d[item] = True
    return False not in d.values()

g = []
def findPossibilities():
    for x in xrange(0, 333):
        if hasAllNumbers(x):
            g.append(x)
    return g
```

• The next step is mapping the indexes: considering s="ABC", when we have items like "321" we will get s[3]+s[2]+s[1] = "CBA". Then we go through the list we produced (g) and structure the letters based on the encoded indexes.

```python
perm = []
def getPermutations():
    s = "ABC"
    for item in g:
        digits = str(item)
        # map each (1-based) encoded digit back to a letter of s
        d = s[int(digits[0]) - 1] + s[int(digits[1]) - 1] + s[int(digits[2]) - 1]
        perm.append(d)
    return perm
```

The complexity of the algorithm is O(n^2). I'd like to know if there is a better approach to tackle this problem. It's hard to get lower complexity. Is there any better solution, based on a mathematical formula that I missed, in order to get a linear time solution?

### /r/compsci

#### Routers, The Internet & YouTube Offline - Computerphile

### StackOverflow

#### sbt stage: Not a valid command

I am getting errors when I try to stage my application using 'sbt clean compile stage' ("Not a valid command"). I have done this hundreds of times on other machines without a problem. I have SBT 0.13.5 -- has anyone seen this before? I have read this other post, but I'm not on Heroku. Thanks.

### AWS

#### OpenID Connect Support for Amazon Cognito

This past summer, we launched Cognito to simplify the task of authenticating users and storing, managing, and syncing their data across multiple devices. Cognito already supports a variety of identities — public provider identities (Facebook, Google, and Amazon), guest user identities, and recently announced developer authenticated identities.
Today we are making Amazon Cognito even more flexible by enabling app developers to use identities from any provider that supports OpenID Connect (OIDC). For example, you can write AWS-powered apps that allow users to sign in using their user name and password from Salesforce or Ping Federate.

OIDC is an open standard that enables developers to leverage additional identity providers for authentication. This way they can focus on developing their app rather than dealing with user names and passwords. Today's launch adds OIDC provider identities to the list. Cognito takes the ID token that you obtain from the OIDC identity provider and uses it to manufacture unique Cognito IDs for each person who uses your app. You can use this identifier to save and synchronize user data across devices and to retrieve temporary, limited-privilege AWS credentials through the AWS Security Token Service.

Building upon the support for SAML (Security Assertion Markup Language) that we launched last year, we hope that today's addition of support for OIDC demonstrates our commitment to open standards. To learn more and to see some sample code, see our new post, Building an App using Amazon Cognito and an OpenID Connect Identity Provider, on the AWS Security Blog. If you are planning to attend the Internet Identity Workshop next week, come meet the members of the team that added this support!

-- Jeff;

### TheoryOverflow

#### Algebra oriented branch of theoretical computer science

I have a very strong base in algebra, namely:

• commutative algebra,
• homological algebra,
• field theory,
• category theory,

and I am currently learning algebraic geometry. I am a math major with an inclination to switch to theoretical computer science. Keeping the above mentioned fields in mind, which field of theoretical computer science would be the most appropriate to switch to? That is, in which field can the theory and mathematical maturity obtained by pursuing the above fields be used to one's advantage?
### StackOverflow

#### What does "::" mean? [duplicate]

This question already has an answer here:

I can't understand the meaning of "::". I thought that it means adding to the list. But then I saw the "zip" function:

```scala
def zip[A, B](xs: List[A], ys: List[B]): List[(A, B)] = (xs, ys) match {
  case (h1 :: t1, h2 :: t2) => (h1, h2) :: zip(t1, t2)
  case _                    => Nil
}
```

It looks like comparing, but what exactly happens here I have no idea. Can someone explain to me what "::" is doing in that function? Thanks in advance!

### /r/compsci

#### Actor Model Of Computation: Scalable Robust Information Systems

### CompsciOverflow

#### How to prove a language is decidable

Hopefully this is not a duplicate. How do I prove a language L = {a, b, c} is decidable or not? I read somewhere that if a Turing machine accepts a language and halts on every input string, then the language is decidable. Having said that, how do I design/prove that a Turing machine accepts {a, b, c}? I am new to the concepts of automata and complexity, so please don't kill me if this is too basic a question.

### Dave Winer

#### I'd rather see silo-free than ad-free

Ello has taken a pledge to be ad-free. I'd prefer a stronger pledge: to make the pathways in and out easy and open, always. That way I can hook it up to any flow I want in either direction. It's like having a fire exit in a movie theater. It makes it possible for people to invest without fear.

### Lobsters

#### EFF: Surveillance Self-Defense

### CompsciOverflow

#### Flowcharts vs DFA resp. FSM equivalency

First, I apologize if I have confused the terms DFA and FSM; to me it seems they are the same thing.

The question is simple: are flowcharts (sequence, branching and jumping) equivalent to DFAs resp. FSMs? I am a bit confused about this. There are classes where, using logical synthesis, Karnaugh maps, state encodings, flip-flops etc., one is able to construct hardware consisting of logic gates and flip-flops which realizes the desired DFA.
Basically all processes that run on a computer (no matter whether written in C# or assembler) are at the lowest level realized through logic gates, zeros and ones. So it seems that programs first need to be converted (by a compiler, I suppose) to some form as I've described. This might imply that every problem that is solvable using C# is solvable using an FSM. But this is in contradiction to the Chomsky hierarchy and all the related theory, which says that you cannot do the same magic with regular expressions (which are based on FSMs) that you can do on a Turing machine (which is equivalent to any programming language; if I am wrong, correct me please).

Moreover, if flowcharts (or even C#, Java, ... source code) were equivalent to FSMs, why do we not have all software formally verified so far? There is a mathematical apparatus for FSMs and related stuff, so why not formally verify everything and ensure correctness? What am I missing here?

### Jeff Atwood

#### What If We Could Weaponize Empathy?

One of my favorite insights on the subject of online community is from Tom Chick:

> Here is something I've never articulated because I thought, perhaps naively, it was understood: The priority for participating on this forum is not the quality of the content. I ultimately don't care how smart or funny or observant you are. Those are plusses, but they're never prerequisites. The priority is on how you treat each other. I expect spats, arguments, occasional insults, and even inevitable grudges. We've all done that. But in the end, I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike. This community is small enough, intimate enough, that I feel it's a reasonable expectation.

Indeed, disagreement and arguments are inevitable and even healthy parts of any community.
The difference between a sane community and a terrifying warzone is the degree to which disagreement is pursued in the community, gated by the level of respect community members have for each other. In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence. I may be a fan of the smackdown learning model and kayfabe, but I am definitely not a fan of fighting nasty. I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike. There's a word for this: empathy. One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities. The goal of discussion software shouldn't be to teach you how to click the reply button, and how to make bold text, but how to engage in civilized online discussion with other human beings without that discussion inevitably breaking down into the collective howling of wolves. That's what the discussion software should be teaching you: Empathy. You. Me. Us. We can all occasionally use a gentle reminder that there is a real human being on the other side of our screen, a person remarkably like us. I've been immersed in the world of social discussion for two years now, and I keep going back to the well of empathy, time and time again. The first thing we did was start with a solid set of community guidelines on civilized discussion, and I'm proud to say that we ship and prominently feature those guidelines with every copy of Discourse. They are bedrock. But these guidelines only work to the extent that they are understood, and the community helps enforce them. 
In Your Community Door, I described the danger of allowing cruel and hateful behavior in your community – behavior so obviously corrosive that it should never be tolerated in any quantity. If your community isn't capable of regularly exorcising the most toxic content, and the people responsible for that kind of content, it's in trouble. Those rare bad apples are group poison. Hate is easy to recognize. Cruelty is easy to recognize. You do not tolerate these in your community, full stop. But what about behavior that isn't so obviously corrosive? What about behavior patterns that seem sort of vaguely negative, but … nobody can show you exactly how this behavior is directly hurting anyone? What am I talking about? Take a look at the Flamewarriors Online Discussion Archetypes, a bunch of discussion behaviors that never quite run afoul of the rules, per se, but result in discussions that degenerate, go in circles, or make people not want to be around them. What we're getting into is shades of grey, the really difficult part of community moderation. I've been working on Discourse long enough to identify some subtle dark patterns of community discussion that – while nowhere near as dangerous as hate and cruelty – are still harmful enough to the overall empathy level of a community that they should be actively recognized when they emerge, and interventions staged. ### 1. Endless Contrarianism Disagreement is fine, even expected, provided people can disagree in an agreeable way. But when someone joins your community for the sole purpose of disagreeing, that's Endless Contrarianism. Example: As an atheist, Edward shows up in a religion discussion area to educate everyone there about the futility of religion. Is that really the purpose of the community? Does anyone in the community expect to defend the very concept of religion while participating there? 
If all a community member can seem to contribute is endlessly pointing out how wrong everyone else is, and how everything about this community is headed in the wrong direction – that's not building constructive discussion – or the community. Edward is just arguing for the sake of argument. Take it to debate school. ### 2. Axe-Grinding Part of what makes discussion fun is that it's flexible; a variety of topics will be discussed, and those discussions may naturally meander a bit within the context defined by the site and whatever categories of discussion are allowed there. Axe-Grinding is when a user constantly gravitates back to the same pet issue or theme for weeks or months on end. Example: Sara finds any opportunity to stir up a GMO debate, no matter what the actual topic is. Viewing Sara's post history, GMO and Monsanto are constant, repeated themes in any context. Sara's negative review of a movie will mention eating GMO popcorn, because it's not really about the movie – it's always about her pet issue. This kind of inflexible, overbearing single-issue focus tends to drag discussion into strange, unwanted directions, and rapidly becomes tiresome to other participants who have probably heard everything this person has to say on that topic multiple times already. Either Sara needs to let that topic go, or she needs to find a dedicated place (e.g. GMO discussion areas) where others want to discuss it as much as she does, and take it there. ### 3. Griefing In discussion, griefing is when someone goes out of their way to bait a particular person for weeks or months on end. By that I mean they pointedly follow them around, choosing to engage on whatever topic that person appears in, and needle the other person in any way they can, but always strictly by the book and not in violation of any rules… technically. 
Example: Whenever Joe sees George in a discussion topic, Joe now pops in to represent the opposing position, or point out flaws in George's reasoning. Joe also takes any opportunity to remind people of previous mistakes George made, or times when George was rude. When the discussion becomes more about the person than the topic, you're in deep trouble. It's not supposed to be about the participants, but the topic at hand. When griefing occurs, the discussion becomes a stage for personal conflict rather than a way to honestly explore topics and have an entertaining discussion. Ideally the root of the conflict between Joe and George can be addressed and resolved, or Joe can be encouraged to move on and leave the conflict behind. Otherwise, one of these users needs to find another place to go. ### 4. Persistent Negativity Nobody expects discussions to be all sweetness and light, but neverending vitriol and negativity are giant wet blankets. It's hard to enjoy anything when someone's constantly reminding you how terrible the world is. Persistent negativity is when someone's negative contributions to the discussion far outweigh their positive contributions. Example: Even long after the game shipped, Fred mentions that the game took far too long to ship, and that it shipped with bugs. He paid a lot of money for this game, and feels he didn't get the enjoyment from the game that was promised for the price. He warns people away from buying expansions because this game has a bad track record and will probably fail. Nobody will be playing it online soon because of all the problems, so why bother even trying? Wherever topics happen to go, Fred is there to tell everyone this game is worse than they knew. If Fred doesn't have anything positive to contribute, what exactly is the purpose of his participation in that community? What does he hope to achieve? 
Criticism is welcome, but that shouldn't be the sum total of everything Fred contributes, and he should be reasonably constructive in his criticism. People join communities to build things and celebrate the enjoyment of those things, not have other people dump all over it and constantly describe how much they suck and disappoint them. If there isn't any silver lining in Fred's cloud, and he can't be encouraged to find one, he should be asked to find other places to haunt. ### 5. Ranting Discussions are social, and thus emotional. You should feel something. But prolonged, extreme appeal to emotion is fatiguing and incites arguments. Nobody wants to join a dry, technical session at the Harvard Debate Club, because that'd be boring, but there is a big difference between a persuasive post and a straight-up rant. Example: Holly posts at the extremes – either something is the worst thing that ever happened, or the best thing that ever happened. She will post 6 to 10 times in a topic and state her position as forcefully as possible, for as long and as loud as it takes, to as many individual people in the discussion as it takes, to get her point across. The stronger the language in the post, the better she likes it. If Holly can't make her point in a reasonable way in one post and a followup, perhaps she should rethink her approach. Yelling at people, turning the volume to 11, and describing the situation in the most emotional, extreme terms possible to elicit a response – unless this really is the worst or best thing to happen in years – is a bit like yelling fire in a crowded theater. It's irresponsible. Either tone it down, or take it somewhere that everyone talks that way. ### 6. Grudges In any discussion, there is a general expectation that everyone there is participating in good faith – that they have an open mind, no particular agenda, and no bias against the participants or the topic. 
While short term disagreement is fine, it's important that the people in your community have the ability to reset and approach each new topic with a clean(ish) slate. When you don't do that, when people carry ill will from previous discussions toward the participants or topic into new discussions, that's a grudge. Example: Tad strongly disagrees with a decision the community made about not creating a new category to house some discussion he finds problematic. So he now views the other leaders in the community, and the moderators, with great distrust. Tad feels like the community has turned on him, and so he has soured on the community. But he has too much invested here to leave, so Tad now likes to point out all the consequences of this "bad" decision often, and cite it as an example of how the community is going wrong. He also follows another moderator, Steve, around because he views him as the ringleader of the original decision, and continually writes long, critical replies to his posts. Grudges can easily lead to every other dark community pattern on this list. I cannot emphasize enough how important it is to recognize grudges when they emerge so the community can intervene and point out what's happening, and all the negative consequences of a grudge. It's important in the broadest general life sense not to hold grudges; as the famous quote goes (as near as I can tell, attributed to Alcoholics Anonymous) Holding a grudge is like drinking poison and expecting the other person to die. So your community should be educating itself about the danger of grudges, the root of so many other community problems. But it is critically important that moderators never, and I mean never ever, hold grudges. That'd be disastrous. ### What can you do? I made a joke in the title of this post about weaponizing empathy. I'm not sure that's even possible. 
But you can start by having clear community guidelines, teaching your community to close the door on overt hate, and watching out for any overall empathy erosion caused by the six dark community behavior patterns I outlined above. At the risk of sounding aspirational, here's one thing I know to be true, and I advise every community to take it to heart: I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what things some of us like or dislike. ### /r/compsci #### Assignment help: Logic (satisfiability) Hi, first-year compsci student here, and I'm completely stuck on a question I have to do for my assignment. The question is, as the title states, on logic, particularly on satisfiability. It says: Due to budget cuts, an elementary school has to accommodate more children. The children are seated on benches which fit two children each. The principal, angry with the situation, goes to the education ministry with a bench (which can only seat two children) and three children and says to the minister: "Three children don't fit on that bench, I'll prove it to you", and manages to prove it. We have to give a "logical justification" of how he could prove it, but I'm kinda lost. I guess we have to use DPLL, tableaux or resolution, but I don't know how to formalize a bench and three kids. Any input would be greatly appreciated; this has got me completely dumbfounded. Thanks for the help! 
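One way to formalize the bench (a sketch with my own variable naming, not necessarily what the course expects): introduce propositions x[i][j] meaning "child i sits in seat j", with clauses saying every child gets a seat and no seat holds two children. The conjunction is unsatisfiable, which is exactly what DPLL, tableaux or resolution would show by deriving a contradiction; a brute-force check over all 2^6 assignments demonstrates the same thing:

```python
from itertools import product

# Pigeonhole formalization: child i (0..2) sits in seat j (0..1).
children, seats = 3, 2

def satisfiable():
    for bits in product([False, True], repeat=children * seats):
        x = [[bits[i * seats + j] for j in range(seats)] for i in range(children)]
        # Clause group 1: every child is seated somewhere.
        seated = all(any(x[i]) for i in range(children))
        # Clause group 2: no two children occupy the same seat.
        no_clash = all(not (x[i][j] and x[k][j])
                       for j in range(seats)
                       for i in range(children)
                       for k in range(i + 1, children))
        if seated and no_clash:
            return True
    return False

print(satisfiable())  # prints False: every assignment violates some clause
```

The point is that "three children, two seats" becomes a small pigeonhole formula; any complete propositional method refutes it.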
submitted by Tequoia [link] [2 comments] #### New Evidence of the NSA Deliberately Weakening Encryption ### /r/netsec #### An introduction to threat modeling, by the EFF ### StackOverflow #### Counting survivors of a selection with functional programming I would like to rewrite my code in a more functional style, but I am not much of an expert. I have a collection of pointers to items: std::vector<Item*>* collection;  I need to apply a selection to the items. The selection is made up of several steps, each coded as a function: bool pass_step0(const Item& item);  For every step I need to know how many elements survive the selection. An element survives the selection if it passes all of the steps. So this is what I am doing: std::vector<int> nsurvivals_after_step; std::vector<bool> mask(collection->size(), true); int i = 0; for (auto* item : *collection) { if (!pass_step0(*item)) { mask[i] = false; } ++i; } nsurvivals_after_step.push_back(std::count(mask.begin(), mask.end(), true)); i = 0; for (auto* item : *collection) { if (!pass_step1(*item)) { mask[i] = false; } ++i; } nsurvivals_after_step.push_back(std::count(mask.begin(), mask.end(), true));  There is a lot of repetition in the code and I have to use a global index i. How can I use more sophisticated C++ features to improve it? ### QuantOverflow #### Intermarket analysis - related time series? I'm about to embark on training a neural network on daily forex data, with a view to obtaining a predictive network. I'm also interested in using data other than the forex currency pair data itself, in a manner similar to intermarket analysis. What other time series data does the panel think will provide meaningful input? Obviously, various other forex cross rates are important, along with perhaps interest rate time series. But what about perhaps less intuitively obvious time series? I'm more interested in time series that have a justifiably fundamental reason for inclusion rather than those that might simply exhibit historical correlation. 
Links to online references/papers, e.g. SSRN etc., would be very welcome. ### StackOverflow #### Command scoping or resolution in SBT Build.scala My question is why sbt is not locating commands when using a multiproject build. My plugin resembles object MyPlugin extends Plugin { lazy val plug = Seq( commands ++= Seq(versionWriteSnapshotRelease) ) def versionWriteSnapshotRelease = Command.command( "versionWriteSnapshotRelease", "Writes the release format of the snapshot version. This is used to preserve the actual snapshot version in a release commit.", "" ) { state => .... } }  I have my Build.scala file which resembles lazy val app = Project(id = "app-subproject", base = file("app")) .settings(MyPlug.plug :_*) .settings(...) lazy val common = Project(id = "library-subproject", base = file("common")) .settings(MyPlug.plug :_*) .settings(...)  With files laid out like root |_ common |_ src |_ app |_ src  This configuration fails with an error like [error] Not a valid command: versionWriteSnapshotRelease [error] Not a valid project ID: versionWriteSnapshotRelease [error] Expected ':' (if selecting a configuration) [error] Not a valid key: versionWriteSnapshotRelease (similar: version, ...) [error] versionWriteSnapshotRelease  However, if I restructure to something like  lazy val app = Project(id = "app-subproject", base = file(".")) .settings(MyPlug.plug :_*) .settings(...) lazy val common = Project(id = "library-subproject", base = file("common")) .settings(MyPlug.plug :_*) .settings(...)  With files laid out like root |_ common |_ src |_ src  then it works. Note that my change is to put the app project /src in the base dir and set the app project to have base "." This plugin is used across multiple projects and has no issue when the file layout is in the second form. So I know it isn't an issue with the plugin per se. 
This seems to have something to do with the scoping of the commands, but I'm not sbt-savvy enough to know where to start. ### /r/netsec #### Diskless true SSH honeypot using Alpine Linux ### DataTau #### The Weather Channel's Secret: Less Weather, More Clickbait ### Planet FreeBSD #### FreeBSD 10.1-RC3 Now Available The third RC build of the 10.1-RELEASE release cycle is now available on the FTP servers for the amd64, armv6, i386, ia64, powerpc, powerpc64 and sparc64 architectures. The image checksums are included in the original announcement email. Installer images and memory stick images are available here. If you notice problems you can report them through the Bugzilla PR system or on the -stable mailing list. If you would like to use SVN to do a source-based update of an existing system, use the "releng/10.1" branch. A list of changes since 10.0-RELEASE is available here. Changes between 10.1-RC2 and 10.1-RC3 include: • Several fixes to the UDPLite protocol implementation. • The vt(4) driver has been updated to save and restore keyboard mode and LED states when switching windows. • Several fixes to the SCTP protocol implementation. • A potential race condition in obtaining a file pointer has been corrected. • Fix ZFS ZVOL deadlock and rename issues. • Restore libopie.so ABI compatibility with 10.0-RELEASE. • Removed the last vestige of MD5 password hashes. • Several rc(8) script updates and fixes. • bsdinstall(8) has been updated to allow selecting local_unbound in the default services to enable at first boot. • Prevent ZFS leaking pool free space. • Fix rtsold(8) remote buffer overflow vulnerability. [SA-14:20] • Fix routed(8) remote denial of service vulnerability. [SA-14:21] • Fix memory leak in sandboxed namei lookup. [SA-14:22] • OpenSSL has been updated to version 1.0.1j. [SA-14:23] • Fix an issue where a FreeBSD virtual machine provisioned in the Microsoft Azure service does not recognize the second attached disk on the system. 
Pre-installed virtual machine images for 10.1-RC3 are also available for amd64 and i386 architectures. The images are located here. The disk images are available in QCOW2, VHD, VMDK, and raw disk image formats. The image download size is approximately 135 MB, which decompresses to a 20 GB sparse image. The partition layout is: • 512k - freebsd-boot GPT partition type (bootfs GPT label) • 1GB - freebsd-swap GPT partition type (swapfs GPT label) • ~17GB - freebsd-ufs GPT partition type (rootfs GPT label) To install packages from the dvd1.iso installer, create and mount the /dist directory: # mkdir -p /dist # mount -t cd9660 /dev/cd0 /dist Next, install pkg(8) from the DVD: # env REPOS_DIR=/dist/packages/repos pkg bootstrap At this point, pkg-add(8) can be used to install additional packages from the DVD. Please note, the REPOS_DIR environment variable should be set each time the DVD is used as the package repository; otherwise, conflicts with packages fetched from the upstream mirrors may occur. For example, to install Gnome and Xorg, run: # env REPOS_DIR=/dist/packages/repos pkg install \ xorg-server xorg gnome2 [...] The freebsd-update(8) utility supports binary upgrades of amd64 and i386 systems running earlier FreeBSD releases. Systems running earlier FreeBSD releases can upgrade as follows: # freebsd-update upgrade -r 10.1-RC3 During this process, freebsd-update(8) may ask the user to help by merging some configuration files or by confirming that the automatically performed merging was done correctly. # freebsd-update install The system must be rebooted with the newly installed kernel before continuing. # shutdown -r now After rebooting, freebsd-update needs to be run again to install the new userland components: # freebsd-update install It is recommended to rebuild and install all applications if possible, especially if upgrading from an earlier FreeBSD release, for example, FreeBSD 8.x. 
Alternatively, the user can install misc/compat9x and other compatibility libraries, after which the system must be rebooted into the new userland: # shutdown -r now Finally, after rebooting, freebsd-update needs to be run again to remove stale files: # freebsd-update install Love FreeBSD? Support this and future releases with a donation to the FreeBSD Foundation! ### CompsciOverflow #### Which method for ODE instead of Euler's? I need a super-fast method for ordinary differential equations. Should I use the midpoint method? I need this for a reaction-diffusion system (Gray-Scott). ### Planet Clojure #### The perfect match A talk about pattern matching by János Erdős ### QuantOverflow #### Source on pricing / valuation of trust preferred securities? Is there a good source on pricing / valuation of trust preferred securities? I used GOOGLE, GOOGLE SCHOLAR and NEW YORK PUBLIC LIBRARY, but the results were meager. I found the book Handbook of Hybrid Securities by de Spiegeleer, but the discussion of TruPS is brief and not focused on valuation. ### StackOverflow #### pyzmq: how do you filter at publisher side On http://zguide.zeromq.org/ it says "Pub-sub filtering is now done at the publisher side instead of subscriber side. This improves performance significantly in many pub-sub use cases. You can mix v3.2 and v2.1/v2.2 publishers and subscribers safely." And I am following examples on http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/patterns/pubsub.html and filtering happens on the subscriber side. How do I filter on the publisher side? Note: I have pyzmq-14.3.1 #### Why can you define a function without a parameter in Haskell? I have a function add which I apply partially to create a new function addOne. add :: Int -> (Int -> Int) add x y = x + y  addOne can be defined with an explicit parameter addOne :: Int -> Int addOne y = add 1 y  or without an explicit parameter addOne :: Int -> Int addOne = add 1  I have four questions: 1. 
Why can I define the new function without an explicit parameter? 2. Is there any difference between these two definitions? 3. How do I know when I can define a function without a parameter? 4. Which definition is preferred, and when? ### /r/compsci #### Argonne YouTube videos a rich vein for CompSci and HPC training and background. Crossposted to /r/HPC ### /r/systems #### New demos of the BFQ I/O scheduler Hi, I have just uploaded to YouTube two new demos in which we compare the latest version of the BFQ I/O scheduler with CFQ, DEADLINE and NOOP, on both an SSD and an HDD. Unlike our previous demos, this time we also show the performance of DEADLINE and NOOP in terms of application responsiveness and I/O latency. Here are the links: http://youtu.be/1cjZeaCXIyM http://youtu.be/ZeNbS0rzpoY submitted by paolovalente [link] [2 comments] ### CompsciOverflow #### Prove that the language is not regular without using Pumping Lemma I am practising problems on Regular Languages and I came across this question: Prove that the language $\{a^m b^n : m \ge 0, n \ge 0, m \ne n\}$ is not regular. (Using the pumping lemma for this one is a bit tricky. You can avoid using the pumping lemma by combining results about the closure under regular operations.) I have tried to prove it using the pumping lemma in the following way: Let $p$ be a sufficiently large integer; then we construct the string $s = a^p b^{p + p!}$. Now by the pumping lemma conditions, the string $s$ can be written as $xyz$ where $|xy| \le p$. Hence $xy$ contains only a's. If we choose any substring $y$ of length $k \le p$ from $xy$, we can always find a $C$ such that $p + Ck = p + p!$. We can also prove it if we choose $y$ to be just the single character string $a$ and then we pump down. Q1. Please let me know if there is a flaw in the above proofs. Q2. How can closure properties be applied to prove the above? Till now I have applied closure properties to prove regularity, but never the converse. 
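For Q2, here is a sketch of the closure-property route the question's hint points at, using the standard fact that $\{a^n b^n : n \ge 0\}$ is not regular:

```latex
\text{Assume } L = \{a^m b^n : m \ge 0,\ n \ge 0,\ m \ne n\} \text{ is regular.}\\
\text{Regular languages are closed under complement and intersection, so}\\
\overline{L} \cap L(a^*b^*) = \{a^n b^n : n \ge 0\}\\
\text{would also be regular. Since } \{a^n b^n\} \text{ is the classic non-regular language,}\\
\text{this is a contradiction; hence } L \text{ is not regular.}
```

The intersection with $L(a^*b^*)$ is what discards the strings in $\overline{L}$ that are not of the form $a^m b^n$ at all, leaving exactly the strings with $m = n$.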
### StackOverflow #### Instantiating generic arrays in Scala I have a function that makes arrays of a specific type: def mkArray[A:ClassTag:Ordering](size:Int):Array[A] = Array.ofDim[A](size)  And I want to make an array arr of type Int or String depending on a String str like so: var arr = if(str=="i"){mkArray[Int](size)}else{mkArray[String](size)}  and now I try to add values to the array like so: arr(n) = num.toInt // num is a String like "123"  But it says: - type mismatch; found : Int required: _366 where type _366 >: Int with String  How can I get around this, making arr of type Array[Int] or Array[String] depending on the string str? Any help is appreciated, thanks! ### QuantOverflow #### Is there an accepted method for quantifying the risk of inaccuracy of nascent TRM systems? Have a somewhat meta question here. I am part of a trading risk management implementation project. I also manage day-to-day risk reporting to management and the trading desks. Our implementation was successful in that the risk modeling is much more accurate than the spreadsheets. However, there are still issues with the system that turn up and create an impact on PnL for the books. The swings up and down due to system issues send a bad message about the amount of market risk that is being taken on. I'd like to apply something like a haircut to the book value and smooth out these technology-related issues, but I'm not sure if that is an acceptable business practice. Assume that you have no say or input on the state of the system itself. ### StackOverflow #### Scala: Best way to parse command-line parameters (CLI)? What's the best way to parse command-line parameters in Scala? I personally prefer something lightweight that does not require an external jar. Related: ### Lobsters #### Consensus Protocols: Two-Phase Commit ### Planet Clojure #### Weekly Update: Talk Transcripts, Clojure Architecture, OS X Yosemite As I have no other article to publish this week, I thought a weekly update would be in order. 
Last week I wrote about making relevant and interesting talks more accessible. In the course of that project, I have had eleven talks transcribed so far, four more than when I announced the project last week. Not only have I received great feedback about how appreciated this is, I have also learned a lot myself while proofreading the transcripts. With all the stuff that I have learned and that I am still learning (with a few more talks in the pipeline), there are a couple of things that I want to rethink regarding the architecture of my BirdWatch application before I continue with describing the architecture further. So let me think first before I publish the next article on the application's architecture. No worries, I expect to have the next one out next week, or the week after that at the latest. ## Thoughts from Guy Steele's talk on Parallel Programming The talk that got me thinking the most about the BirdWatch application's architecture is Guy Steele's talk about Parallel Programming. Not only does he give a great explanation of the differences between parallelism and concurrency, he also gives great insights into the use of accumulators. So what, according to him, is concurrency? Concurrency is when multiple entities, such as users or processes, compete for scarce resources. In that case, we want efficient ways of utilizing the scarce resources (CPU, memory bandwidth, network, I/O in general) so that more of the entities can be served simultaneously on the same box or number of boxes. Parallelism, on the other hand, is when there are vast resources and we want to allocate as many of them as possible to the same number of entities. For example, we could have a CPU-bound operation, a single user and 8, 12 or however many cores. If the operation is single-threaded, we won't be able to utilize the resources well at all. 
We could, of course, split up the computation so that it runs on all the cores (maybe even on hundreds of boxes and thousands of cores), but that's easier said than done. Which brings me to accumulators. The accumulator, as the name suggests, is where intermediate results are stored while a computation is ongoing. As Guy points out, this has served us extremely well for as long as we didn't have to think about parallelism. If the computation happens serially in a single thread, the accumulator is great, but what do we do when we want to spawn 20 threads on a 32-core machine, or 1000 threads on 100 machines? If each of them had to work with the same accumulator, things would become messy and the accumulator would become the source of contention, with all kinds of ugly coordination and locking. That doesn't scale at all. Guy suggests using divide-and-conquer instead so that each process in a parallelized approach only creates a partial result which will be combined with other partial results later. He argues for MapReduce in the small in addition to MapReduce in the large. I think this makes a lot of sense. That way, the partial results are created in the map phase on a number of threads (potentially on many machines) and the reduction is where the partial results are combined into a final result. I had been thinking along these lines for a while already when I thought about moving parts of the computation in BirdWatch for previous tweets (wordcount, retweet count, reach,…) to the server side, as the current approach uses way more network bandwidth than strictly necessary. I was mostly thinking about it in terms of mergeability between partial results, which implies that the merge operation between two partial results is both associative and commutative. To explain associativity, let's say we have partial results A, B, C, D and we can merge them in any way we want, for example (A + B) + C + D or A + (B + (C + D)) or whatever. 
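This kind of mergeable partial result can be sketched in a few lines (my own illustration, not from the talk): word counts computed per chunk and then merged, where the merge is associative and commutative, so the grouping of merges doesn't affect the final result.

```python
from collections import Counter
from functools import reduce

# Four "chunks" standing in for work done on separate threads or machines.
chunks = ["the cat", "the dog", "a cat", "the cat dog"]

def wordcount(chunk):
    # Map phase: each worker produces a partial result independently.
    return Counter(chunk.split())

def merge(a, b):
    # Reduce phase: Counter addition is associative and commutative.
    return a + b

partials = [wordcount(c) for c in chunks]
left = reduce(merge, partials)  # ((p0 + p1) + p2) + p3
right = merge(merge(partials[0], partials[1]), merge(partials[2], partials[3]))
print(left == right)  # prints True: grouping of merges doesn't matter
```

Because any grouping (and any order) of merges yields the same totals, the partial results can be combined wherever is convenient, e.g. on the client in ClojureScript.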
As another example, let's say you have a script with 100 pages in 10 piles. It doesn't matter in which way we build intermediate piles, as long as we only merge neighboring piles so that the pile with the higher page count goes under the one with the lower page count. Commutative means that order does not matter: for example, 11 + 5 + 16 + 10 and 10 + 16 + 5 + 11 are the same; both add up to 42. After listening to Guy Steele's talk and proofreading the transcript, I don't want to push the redesign any further out but instead tackle it right away. I think it should be possible to divide the aggregation tasks in BirdWatch into smaller chunks that can then be combined in an associative and commutative way on the client (in ClojureScript), and I have an idea of how to do that. But let me get back into the hammock1 and ponder that idea some more. I'll certainly let you know what I come up with. ## Update to OS X Yosemite Last weekend I updated my production laptop to Yosemite. Of course, I did a full backup with Carbon Copy Cloner first and I also made sure that my old backup laptop was still working before I embarked on the update adventure, just in case. That turned out to be a good idea. The system upgrade did not cause any actual trouble, all went smoothly and I also think that the new design looks great. BUT IT TOOK FOREVER. The time estimation was so off, it was worse than the worst Windows installation experiences ever. Overall it probably took six or seven hours. Apparently, this had to do with Homebrew; check out this article for more information2. Luckily I had read about the upgrade taking longer in a forum somewhere, so I wasn't too worried and just let the installer do its thing. If you plan on doing the upgrade, I think it will be worth it, but only do it when you don't need your machine for a while, like overnight (or you follow the instructions in the article above). 
All works nicely on my machine now as well, even without doing anything special, just with the consequence of giving me a free afternoon because of not being able to get any work done. Also, you can press CMD-L to get console output, which I found much more reassuring than having the installer tell me it'll need another 2 minutes that turn into 2 hours.

## Conclusion

Okay, that's it for today. There are some additions to the Clojure Resources project and I have also added links to the talk transcripts in there. Please check out the talk-transcripts if you haven't done so already. I would love to hear from you if any of these transcripts helped you at all and made the content more accessible than it would have been otherwise.

Until next week, Matthias

1. If you've never listened to Rich Hickey's talk about Hammock-driven development, you really should. Now there's also a transcript for that talk. You find the link to the video recording alongside the transcript.

2. Thanks to @RobStuttaford for pointing this out.

### StackOverflow

#### sbt compile step running multiple times for ~run

I am trying to run a custom task before compilation of a Play 2.3 application. I have this in my build.sbt file:

    lazy val helloTask = TaskKey[Unit]("hello", "hello")

    helloTask := {
      println("hello test")
    }

    (compile in Compile) <<= (compile in Compile) dependsOn helloTask

When I run activator ~run and then load a page, I get the following output:

    C:\Development\test>activator ~run
    [info] Loading project definition from C:\Development\test\project
    [info] Set current project to play (in build file:/C:/Development/test/)

    --- (Running the application from SBT, auto-reloading is enabled) ---

    [info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000

    (Server started, use Ctrl+D to stop and go back to the console...)

    hello test
    [success] Compiled in 418ms
    hello test
    hello test
    [info] play - Application started (Dev)

It seems my custom task is running three times.
Is there a way I can avoid this?

#### Is it possible to configure FreeNAS to automatically reflect adding a user in AD and create a dataset for the new user?

Is it possible to configure FreeNAS to automatically reflect adding a user in AD, and to create a dataset for the new user?

### DataTau

#### Big Data, Hype, the Media and Other Provocative Words to Put in a Title

### CompsciOverflow

#### Prove that there always exists a fair driving schedule [on hold]

Some people agree to carpool, but they want to make sure that any carpool arrangement is fair and doesn't overload any single person with too much driving. Some scheme is required because none of them goes to work every day, and so the subset of them in the car varies from day to day. Let the people be labeled in a set $S = \{p_1,...,p_k\}$.

We say that the total driving obligation of $p_j$ over a set of days is the expected number of times that $p_j$ would have driven, had a driver been chosen uniformly at random from among the people going to work each day. That is, suppose the carpool plan lasts for $d$ days, and on the $i^{th}$ day a subset $S_i \subseteq S$ of the people go to work. Then the total driving obligation $\delta_j$ for $p_j$ can be written as $\delta_j = \sum_{i:p_j\in S_i} \frac{1}{|S_i|}$. Ideally, we'd like to require that $p_j$ drives at most $\delta_j$ times, but $\delta_j$ may not be an integer.

A driving schedule is a choice of a driver for each day - a sequence $p_{i_1}, p_{i_2},...,p_{i_d}$ with $p_{i_t}\in S_t$ - and a fair driving schedule is one in which $p_j$ is chosen as a driver on at most $\lceil \delta_j \rceil$ days.

1. Prove that for any sequence of sets $S_1,...,S_d$, there exists a fair driving schedule.

I'm finding this question very difficult to answer. My intuition tells me that there should always be a fair driving schedule, but I don't know how to prove that. I was thinking that we could do things inductively, but the problem breaks down after the base case.
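As a sanity check on the definitions (a sketch of mine, not part of the exercise), the obligations $\delta_j$ and the fairness bound $\lceil \delta_j \rceil$ can be computed directly:

```python
from math import ceil

def driving_obligations(days):
    """days: a list of sets, S_i = the people going to work on day i.
    Returns delta_j for each person: the sum of 1/|S_i| over the days they ride."""
    people = set().union(*days)
    return {p: sum(1.0 / len(s) for s in days if p in s) for p in people}

days = [{"p1", "p2"}, {"p1", "p2", "p3"}, {"p1"}]
delta = driving_obligations(days)
# p1 rides every day: 1/2 + 1/3 + 1 = 11/6, so fairness allows ceil(11/6) = 2 drives
print(delta["p1"], ceil(delta["p1"]))
```

One observation that may help with intuition: the $\delta_j$ always sum to exactly $d$ (each day contributes $|S_i| \cdot \frac{1}{|S_i|} = 1$), so the ceilings leave just enough slack for a schedule covering all $d$ days.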
For example, consider the base case being $S_1$. Then there always exists a fair driving schedule, because every driver $\in S_1$ can only drive at most once. After the base case, if we have just $S_1$ and $S_2$, we run into trouble because $S_1$ and $S_2$ can be completely different, very similar, or mixed. How can we show that there is always a fair driving schedule for just two lists? Generally, how can we extend this to $d$ lists?

### Planet Clojure

#### Results of 2014 State of Clojure and ClojureScript Survey

Update 10/24/14: these were the raw results; you can see the analysis of the results here.

The 2014 State of Clojure and ClojureScript Survey was open from Oct. 8-17th. The State of Clojure survey (which was applicable to all users of Clojure, ClojureScript, and ClojureCLR) had 1339 respondents. The more targeted State of ClojureScript survey had 642 respondents.

Reports with charts from some of the survey questions and links to the raw data (see "Export Data" in the upper right corner of each report) are here:

Those reports contain charts for all but the grid-style and text response questions. You can find all of the text responses (extracted and sorted) for the text questions here:

You may wish to refer back to the 2013 or 2012 survey results as well!

### Dave Winer

#### Twitter's announcements from a web developer's perspective

I watched most of yesterday's press announcements about Twitter's new toolkits for developers. I know they can't do everything, but I was surprised that they're more or less leaving the Twitter API as-is, at least based on what I heard yesterday. There are so many people I'd like to gossip about this with, and I know I won't get the chance, so here's a blog post instead.

I develop in JavaScript in the browser and on the server in the node.js environment. Between these two platforms, you need a lot of glue to connect a UI in the browser to services running on twitter.com.
It could be a lot simpler, as the JavaScript toolkits from Facebook and Dropbox are - two other services I've adapted to. They do more of the work for you. And they're all missing some of what the others do. I really want a service that does what all three of them do.

Every app needs identity and storage. Not necessarily a lot of storage. A megabyte is a lot of space for an outline, which is basically an XML document. With the amount of space used by a few pictures, something Twitter and Facebook already do for users without limits, a huge range of interesting apps could be written. Today we have to do that for ourselves. It would be easier for everyone if the platforms did it too.

There's a big void out there; someone should fill it. If Twitter had, it would have made 2015 a more interesting year, imho.

### TheoryOverflow

#### Is the SAT variant where exactly k variables must be set to true known? [on hold]

Consider the following satisfiability variant: Given a CNF formula $F$ and an integer $k$, decide if there is an assignment $\phi$ such that $F$ is satisfied under $\phi$, and $\phi$ sets exactly $k$ variables true.

Is this a known problem? What is its complexity?

### StackOverflow

#### Play Framework WS.url stuck forever

I've been banging my head against the wall debugging a production issue which I managed to downsize to the following side-test:

    def test = Action.async { request =>
      WS.url("https://linklyapp.com/pricing?utm_content=buffer2f4a8&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer")
        .withRequestTimeout(3000)
        .withFollowRedirects(false)
        .get
        .map { response =>
          Logger.debug("Got a response")
          Ok(response.body)
        }
        .recover {
          case e : Throwable =>
            Logger.error("error", e)
            BadRequest("Couldn't open")
        }
    }

For some reason this never returns. I don't see the debug print or the error. A timeout is set. I also tried setting the timeout via configuration. No difference.

Any suggestions?
#### From Scala to Java 1.8

I would like to write a Spark program that parses a CSV log file, splits the words by the separator ";", and creates an object whose attribute values are the words located at specific positions. The code would look like this in Scala, but I am having trouble translating it to Java 1.8 (I would like to use lambda expressions in Java):

    val file = sc.textFile("hdfs:/../vrLogs.csv")

    class VREvent(val eventTimestamp: String, val deviceID: String, val eventType: String, val itemGroupName: String)

    val vrEvents = file.map(_.split(';')).filter(_.size == 32).map(a => new VREvent(a(0), a(1), a(6), a(13)))

I am not sure how to translate this part to Java: .map(a => new VREvent(a(0), a(1), a(6), a(13))). I tried this (without the filter part):

    JavaRDD<String> records = lines.flatMap(s -> Arrays.asList(s.split(";"))).map(a -> new CDREvent(a[0], a[1], a[6], a[13]));

#### Scala/Akka actor migration between machines

Can running actors be migrated to a different node during their life cycle? According to the Akka roadmap here, automatic actor migration upon failure would be available with the release "Rollins". I was wondering whether this actor migration can somehow be done manually, via some special message or anything? Furthermore, is there anything similar to this in Scala?

#### What options exist for object mapping in scala?

I wish to copy (in a nested manner) values from one object tree to another. In Java I would have used something like Orika. My particular use case is building a sequence of message deltas to generate a latest state.

#### How to solve this programming situation using Clojure in a functional manner?

I have a programming problem that I know how I might tackle in Ruby, but I don't know the best way in Clojure (and figure there may be an elegant way to do this taking a functional mindset). The problem can be simplified as follows: I have a 3 litre bucket, filled with water.
There is a hole in the bottom of the bucket, leaking at 10 mL/s (i.e. it will take 300 seconds / 5 minutes to empty). I have a glass of water with a 100 mL capacity that I can use to pour new water into the bucket. I can only pour the entire contents of the glass into the bucket, no partial pours, and the pour occurs instantaneously. Project out a set of time steps at which I can pour glasses of water into the bucket.

I know there is a pretty obvious way to do this using algebra, but the actual problem involves a "leak rate" that changes with time and "new glass volumes" that don't always equal 100 mL, and as such isn't simple to solve in closed form.

The Ruby way to solve this would be to keep track of the bucket volume using a "Bucket instance" and test at numerous time steps to see if the bucket has 100 mL of room. If so, dump the glass and add to the water in the "bucket instance". Continue the time steps, watching the bucket volume.

I hope what I have described is clear.

### /r/clojure

#### Results of 2014 State of Clojure and ClojureScript Survey

### StackOverflow

#### Akka resizer tries to resize more than upperBound

I am using a RoundRobinPool with a dynamic resizer, using the following code:

    val resizer = DefaultResizer(lowerBound = 1, upperBound = 2)

    lazy val router = {
      context.actorOf(
        RoundRobinPool(nrOfInstances = 1,
          resizer = Some(resizer),
          supervisorStrategy = SupervisorStrategy.defaultStrategy)
        .props(MyActor.props()))
    }

But the problem is that the router is resizing the pool to much more than 2; basically it never stops creating actors until it crashes. Everything "works" if lowerBound == upperBound, but of course the pool then has a constant size. Could this be a bug in Akka, or am I doing something wrong here?
Thank you.

### Portland Pattern Repository

#### Real World

(by dslb-178-003-158-169.178.003.pools.vodafone-ip.de 34 hours ago)

#### Dilbert Obvious

(by dslb-178-003-158-169.178.003.pools.vodafone-ip.de 31 hours ago)

### StackOverflow

#### What's up with range position in scala def macros?

Similar to this: What's up with position on Scala macros?

I'm using -Yrangepos, macro paradise (2.1.0-M1) and scala (2.11.2).

When I'm using annotation macros, tree positions return range positions. The problem is that def macros such as the following only return offset positions:

    def macro_impl[T](c: Context)(code: c.Expr[T]): c.Expr[Unit] = {
      println(code.tree.pos.start, code.tree.pos.end) // => start == end
      c.Expr[Unit](q"()") // w/e
    }

How can I get a range position from the code tree?

EDIT: I reduced the problem to the following project https://github.com/MasseGuillaume/def-macros-pos-issue

    // Build
    scalacOptions ++= Seq("-Yrangepos", "-unchecked", "-deprecation", "-feature")

    // Macros
    import scala.language.experimental.macros
    import scala.reflect.macros.Context

    object Macros {
      def impl[T](c: Context)(code: c.Expr[T]) = {
        import c.universe._
        implicit def liftq = Liftable[c.universe.Position] { p ⇒
          q"(${p.point}, ${p.end})"
        }
        c.Expr[(Int, Int)](q"${code.tree.pos}")
      }

      def pos[T](code: T): (Int, Int) = macro impl[T]
    }

I do get a range, but it's not precise enough:

    object Test extends App {
      val pos = Macros.pos {
        1
      }
      println(pos) // (55,56)
    }

    // that's
    object Test extends App {
      val pos = Macros.pos {
                           ^^
        1
      }
      println(pos)
    }

    // Expecting
    object Test extends App {
      val pos = Macros.pos {
                           ^
        1
      }
      ^
      println(pos)
    }
### StackOverflow

#### Collecting data from nested case classes using Generic

Is it possible to provide a generic function which would traverse an arbitrary case class hierarchy and collect information from selected fields? In the following snippet, such fields are encoded as Thing[T].

The snippet works fine for most scenarios. The only problem is when Thing wraps a type class (e.g. List[String]) and such field is nested deeper in the hierarchy; when it is on the top level, it works fine.

    import shapeless.HList._
    import shapeless._
    import shapeless.ops.hlist.LeftFolder

    case class Thing[T](t: T) {
      def info: String = ???
    }

    trait Collector[T] extends (T => Seq[String])

    object Collector extends LowPriority {
      implicit def forThing[T]: Collector[Thing[T]] = new Collector[Thing[T]] {
        override def apply(thing: Thing[T]): Seq[String] = thing.info :: Nil
      }
    }

    trait LowPriority {
      object Fn extends Poly2 {
        implicit def caseField[T](implicit c: Collector[T]) =
          at[Seq[String], T]((acc, t) => acc ++ c(t))
      }

      implicit def forT[T, L <: HList](implicit g: Generic.Aux[T, L],
          f: LeftFolder.Aux[L, Seq[String], Fn.type, Seq[String]]): Collector[T] =
        new Collector[T] {
          override def apply(t: T): Seq[String] = g.to(t).foldLeft[Seq[String]](Nil)(Fn)
        }
    }

    object Test extends App {
      case class L1(a: L2)
      case class L2(b: Thing[List[String]])

      implicitly[Collector[L2]] // works fine
      implicitly[Collector[L1]] // won't compile
    }


### StackOverflow

#### Clojure - difference between quote and syntax quote

    (def x 1)
    user=> 'x
    x
    user=> `'~x
    (quote 1)


Can anyone explain please how it is evaluated step by step?

### /r/emacs

#### I want to create a minor mode for Python and mark imports, where do I start

Edit: Typo in the title, I meant major mode :/

As the title might have spoiled, I'm trying to write a major mode for self-education purposes. Might as well do something productive so here is the plan:

• Activate the mode instead of python-mode for Python files
• Inherit from python-mode (read "keep python-mode behavior such as faces, colors, indent, completion)
• Then add in features, learning elisp at the same time (first in mind is marking unused imports)

I wanted at first to simply write a minor mode to mark unused imports, but I was thinking I would then want to add more and more features to my Emacs while learning Elisp.

I'm not really familiar with Elisp, but here is what I've written so far. I noticed I was creating a whole new mode and don't know how to simply inherit from python-mode (if that makes sense). I believe I can ask for directions here :]

    (defvar yapm-font-lock-defaults nil
      "Value for font-lock-defaults.")

    (setq yapm-font-lock-defaults
          '(("import \\(.+\\)" . (1 font-lock-import-face))
            ("from \\(.+\\) import \\(.+\\)" . (1 font-lock-import-from-face))))

    (define-derived-mode yapm-mode python-mode
      (setq font-lock-defaults '(yapm-font-lock-defaults))
      (setq mode-name "yapm"))

Slightly out of context, I feel the indent levels in the first code block are wrong (emacs-lisp-mode), but can't tell why it's indented that way.

Any help/directions are welcome : ] Thanks !

submitted by TheFrenchPoulp

### StackOverflow

#### scala-library.jar version in sbt published artifacts

As Scala 2.10.1 is coming out soon, I believe, I want to make sure that artifacts I publish now will automatically work with a scala-library.jar of that version. I use sbt 0.12.2 to publish, and with a setting of

scalaVersion := "2.10.0"


the binary-compatible version suffix is correctly attached to my artifact, e.g.

<artifactId>mylibrary_2.10</artifactId>


...but the scala library dependency still says 2.10.0:

     <dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.10.0</version> <!-- !!! -->
</dependency>


I suppose that is not correct, and it should use 2.10 or 2.10.+ here?

I also tried to add scalaBinaryVersion := "2.10" but that doesn't seem to change anything.

Another idea I had was to use scalaVersion := "2.10.+". Sbt takes forever with Getting Scala 2.10.+ ..., but it finally goes on fine and the pom has this version now for scala-library.jar. So maybe this is the correct way?

#### Why does spy have strange behavior with a method that has a => Any argument?

See my specs2 test:

import org.specs2.mutable.Specification
import org.specs2.mock.Mockito

class MyDialogSpec extends Specification with Mockito {

"My dialog" should {
"do something later" in {
val dialog = spy(new Dialog)
dialog.doAction()
1 === 1
}
}

class Dialog {

def doAction() {
invokeLater {
println("######## invokeLater")
}
}

def invokeLater(f: => Any) = f
}

}


Two things to notice:

1. The method invokeLater(f: => Any) has a by-name (=> Any) parameter
2. I used spy on spy(new Dialog)

When I run this test, the printing is very strange:

######## invokeLater
######## invokeLater
######## invokeLater


You can see it printed 3 times (I expected it only once)!

If I don't use spy, just val dialog = new Dialog, it will just print one time.

Am I missing anything?

### Planet Theory

#### Guest Post by Dr. Hajiaghayi: A new way to rank departments

(This is a guest post by MohammadTaghi Hajiaghayi. His name is not a typo- the first name really is MohammadTaghi.)

Due to our belief in the lack of transparency and well-defined measures in the methods used by U.S. News to rank CS departments in theoretical computer science (and in general), my Ph.D. student Saeed Seddighin and I have worked for several months to provide a ranking based on a real and measurable method - the number of papers in TCS - for the top 50 US universities. To make this possible, we gathered the information about universities from various resources. You may see the ranking and our exact methodology here.

Indeed, we have some initial rankings based on similar measures for computer science in general as well, which we plan to release soon (we are still in the process of double-checking or even triple-checking our data and our analysis due to several factors). The CS theory ranking is our initial release, to get feedback at this point.

Please feel free to give us feedback (hajiagha@cs.umd.edu).

### /r/compsci

#### Comp Sci vs Comp Sci Engineering?

So this is just a quick question for anyone out there who does hiring in the field of computer science.

My school offers a Computer Science and Engineering major where you take a few computer engineering courses and receive a engineering degree vs a straight computer science major where you receive just a science degree. I'm currently in CSE but am considering making the change to CS since I just want to work in comp sci.

So basically does the engineering make you more favorable in the hiring process or will it not really matter?

submitted by thebluej3w

### Planet Clojure

#### Immutant 2 (The Deuce) Alpha2 Released

We're as happy as a cat getting vacuumed to announce our second alpha release of The Deuce, Immutant 2.0.0-alpha2.

Big, special thanks to all our early adopters who provided invaluable feedback on alpha1 and our incremental releases.

## What is Immutant?

Immutant is an integrated suite of Clojure libraries backed by Undertow for web, HornetQ for messaging, Infinispan for caching, Quartz for scheduling, and Narayana for transactions. Applications built with Immutant can optionally be deployed to a WildFly cluster for enhanced features. Its fundamental goal is to reduce the inherent incidental complexity in real world applications.

A few highlights of The Deuce compared to the previous 1.x series:

• It uses the Undertow web server -- it's much faster, with WebSocket support
• It's completely functional "embedded" in your app, i.e. no app server required
• It may be deployed to latest WildFly for extra clustering features

## What's changed in this release?

• Though not strictly part of the release, we've significantly rearranged our documentation. The "tutorials" are now called "guides", and we publish them right along with the apidoc. This gives us a "one-stop doc shop" with better, cross-referenced content.
• We've introduced an org.immutant/transactions library to provide support for XA distributed transactions, a feature we had in Immutant 1.x, but only recently made available in The Deuce, both within WildFly and out of the container as well. The API is similar, with a few minor namespace changes, and all Immutant caches and messaging destinations are XA capable.
• We're now exposing flexible SSL configuration options through our immutant.web.undertow namespace, allowing you to set up an HTTPS listener with some valid combination of SSLContext, KeyStore, TrustStore, KeyManagers, or TrustManagers.
• We've made a large, breaking change to our messaging API. Namely, we've removed the connection and session abstractions, and replaced them with a single one: context. This is somewhat motivated by our implementation using the new JMS 2.0 APIs.
• Datomic can now be used with an Immutant application when inside of WildFly without having to modify the WildFly configuration or add any exclusions. Unfortunately, you still cannot use Datomic with an application that uses org.immutant/messaging outside of WildFly, due to conflicts between the HornetQ version we depend on and the version Datomic depends on. See IMMUTANT-497 for more details.
• HornetQ is now configured via standard configuration files instead of via static Java code, allowing you to alter that configuration if need be. See the messaging guide for details.

We've also released a new version of the lein-immutant plugin (2.0.0-alpha2). You'll need to upgrade to that release if you will use alpha2 of Immutant with WildFly.

For a full list of changes, see the issue list below.

## How to try it

If you're already familiar with Immutant 1.x, you should take a look at our migration guide. It's our attempt at keeping track of what we changed in the Clojure namespaces.

The guides are another good source of information, along with the rest of the apidoc.

For a working example, check out our Feature Demo application!

## Get It

There is no longer any "installation" step as there was in 1.x. Simply add the relevant dependency to your project as shown on Clojars.

## What's next?

We expect to release a beta fairly soon, once we ensure that everything works well with the upcoming WildFly 9 release.

## Get In Touch

If you have any questions, issues, or other feedback about Immutant, you can always find us on #immutant on freenode or our mailing lists.

## Issues resolved in 2.0.0-alpha2

• [IMMUTANT-466] - App using datomic can't find javax.net.ssl.SSLException class in WildFly
• [IMMUTANT-467] - Datomic HornetQ Conflicts with WildFly
• [IMMUTANT-473] - web/run only works at deployment inside wildfly
• [IMMUTANT-474] - See if we need to bring over any of the shutdown code from 1.x to use inside the container
• [IMMUTANT-475] - Write tutorial on overriding logging settings in-container
• [IMMUTANT-477] - Figure out how to get the web-context inside WildFly
• [IMMUTANT-478] - Consider wrapping scheduled jobs in bound-fn
• [IMMUTANT-479] - Get XA working in (and possibly out of) container
• [IMMUTANT-480] - Immutant running out of a container does not handle laptop suspend gracefully
• [IMMUTANT-481] - Expose way to set the global log level
• [IMMUTANT-482] - Destinations with leading slashes fail to deploy in WildFly
• [IMMUTANT-483] - Allow nil :body in ring response
• [IMMUTANT-484] - app-uri has a trailing slash
• [IMMUTANT-485] - The wunderboss-core jar file has a logback.xml file packaged inside of it which competes with a locally configured logback.xml
• [IMMUTANT-487] - Enable explicit control of an embedded web server
• [IMMUTANT-488] - Provide better SSL support than just through the Undertow.Builder
• [IMMUTANT-489] - Re-running servlets yields IllegalStateException
• [IMMUTANT-490] - Don't register fressian codec by default
• [IMMUTANT-491] - at-exit handlers can fail if they refer to any wboss components
• [IMMUTANT-492] - Expose HornetQ broker configuration options
• [IMMUTANT-493] - Revert back to :host instead of :interface for nrepl options
• [IMMUTANT-494] - Expose controlling the context mode to listen
• [IMMUTANT-496] - Expose way to override HornetQ data directories
• [IMMUTANT-498] - Replace connection and session with a single context abstraction
• [IMMUTANT-499] - Consider renaming :client-id on context to :subscription-name
• [IMMUTANT-500] - Throw if listen, queue, or topic is given a non-remote context
• [IMMUTANT-501] - Running the standalone JAR with default "/" context path requires extra slash for inner routes
• [IMMUTANT-502] - Rename caching/compare-and-swap! to swap-in!

#### Using Transit with Immutant 2

Out of the box, Immutant 2 has support for several data serialization strategies for use with messaging and caching, namely: EDN, Fressian, JSON, and none (which falls back to Java serialization). But what if you want to use another strategy? Luckily, this isn't a closed set - Immutant allows us to add new strategies. We took advantage of that and have created a separate project that brings Transit support to Immutant - immutant-transit.

## What is Transit?

From the Transit format page:

Transit is a format and set of libraries for conveying values between applications written in different programming languages.

It's similar in purpose to EDN, but leverages the speed of the optimized JSON readers that most platforms provide.

## What does immutant-transit offer over using Transit directly?

immutant-transit provides an Immutant codec for Transit that allows for transparent encoding and decoding of Transit data when using Immutant's messaging and caching functionality. Without it, you would need to set up the encode/decode logic yourself.

## Usage

Note: immutant-transit won't work with Immutant 2.0.0-alpha1 - you'll need to use an incremental build (#298 or newer).

First, we need to add org.immutant/immutant-transit to our application's dependencies:

        :dependencies [[org.clojure/clojure "1.6.0"]
[org.immutant/immutant "2.x.incremental.298"]
[org.immutant/immutant-transit "0.2.2"]]

If you don't have com.cognitect/transit-clj in your dependencies, immutant-transit will transitively bring in version 0.8.259. We've tested against 0.8.255 and 0.8.259, so if you're running another version and are seeing issues, let us know.

Now, we need to register the Transit codec with Immutant:

        (ns your.app
(:require [immutant.codecs.transit :as it]))

(it/register-transit-codec)

This will register a vanilla JSON Transit codec that encodes to a byte[] under the name :transit with the content-type application/transit+json (Immutant uses the content-type to identify the encoding for messages sent via HornetQ).

To use the codec, provide it as the :encoding option wherever an encoding is used:

        (immutant.messaging/publish some-queue {:a :message} :encoding :transit)

(def transit-cache (immutant.caching/with-codec some-cache :transit))
(immutant.caching/compare-and-swap! transit-cache a-key a-function)

If you need to change the underlying format that Transit uses, or need to provide custom read/write handlers, you can pass them as options to register-transit-codec:

        (it/register-transit-codec
:type :json-verbose
:write-handlers my-write-handlers)

The content-type will automatically be generated based on the :type, and will be of the form application/transit+<:type>.

You can also override the name and content-type:

        (it/register-transit-codec
:name :transit-with-my-handlers
:content-type "application/transit+json+my-stuff"
:write-handlers my-write-handlers)

For more examples, see the example project.

## Why is this a separate project from Immutant?

Transit's format and implementation are young, and are still in flux. We're currently developing this as a separate project so we can make releases independent of Immutant proper that track changes to Transit. Once Transit matures a bit, we'll likely roll this in to Immutant itself.

If you are interested in adding a codec of your own, take a look at the immutant-transit source and at the immutant.codecs namespace to see how it's done.

## Get In Touch

If you have any questions, issues, or other feedback about immutant-transit, you can always find us on #immutant on freenode or our mailing lists.

### Dave Winer

#### What I want from a blogging platform

I want to be able to write down a short idea, one or two paragraphs, hit Publish (or the equivalent) and move on to the next thing.

When I publish it should...

3. The full text should be sent to Facebook and/or WordPress, including a link back to the original post. Revisions to the post flow to Facebook and WordPress.

4. Be included in my RSS feed, with full text.

The most important thing is that it be quick. I lose good ideas because there's no place to put them, or if I put them on Facebook I lose them shortly after they scroll off (why is it so hard to find stuff on FB?). I need to be able to open my outliner, hit the Big Plus, enter my idea and get back to what I was doing. Quick.

This is essential for my blog and also for my worknotes outline.

### StackOverflow

#### Functions in scala

I'm having hard time understanding what the following means in scala:

f: Int => Int


Is this a function? What is the difference between f: Int => Int and def f(Int => Int)?

Thanks

#### What is a monad?

Having briefly looked at Haskell recently, I wondered whether anybody could give a brief, succinct, practical explanation as to what a monad essentially is? I have found most explanations I've come across to be fairly inaccessible and lacking in practical detail, so could somebody here help me?

### TheoryOverflow

#### What does an undecidable problem mean in the context of a modern programming language? [on hold]

Imagine following python code

x = raw_input()
print(x);


Does the problem of undecidability mean that there can possibly be an input to this program such that the program runs forever?
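The program in the question always halts, so nothing is undecidable about it in isolation; undecidability is a statement about the absence of a *general* decision procedure for all programs. A hedged Python sketch (my own illustration, with a hypothetical `halts` function assumed for contradiction) of the classic diagonal argument:

```python
# The question's program, paraphrased: read input, print it, done.
# It terminates on every input.
def program(x):
    print(x)
    return x

# Undecidability: suppose (for contradiction) someone handed us a total
# function halts(f, x) -> bool that correctly decides whether f(x) terminates.
def diagonal(halts):
    def d(f):
        if halts(f, f):   # if f(f) would halt...
            while True:   # ...then d(f) deliberately loops forever
                pass
        return "done"
    # Asking whether d(d) halts now contradicts halts' claimed correctness
    # either way, so no such total halts function can exist.
    return d

program("hello")  # terminates, prints "hello"
```

So the answer to the question is no for this particular program; undecidability only says no algorithm can make that termination judgment for every possible program and input.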

#### Randomized identity-testing for high degree polynomials?

Let $f$ be an $n$-variate polynomial given as an arithmetic circuit of size poly$(n)$, and let $p = 2^{\Omega(n)}$ be a prime.

Can you test if $f$ is identically zero over $\mathbb{Z}_p$, with time $\mbox{poly}(n)$ and error probability $\leq 1-1/\mbox{poly}(n)$, even if the degree is not a priori bounded? What if $f$ is univariate?

Note that you can efficiently test if $f$ is identically zero as a formal expression, by applying Schwartz-Zippel over a field of size say $2^{2|f|}$, because the maximum degree of $f$ is $2^{|f|}$.
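For reference, the bounded-degree test that the last paragraph alludes to can be sketched as follows (an illustrative Python sketch of mine, assuming the circuit is available as an evaluable function and a degree bound is known):

```python
import random

def probably_zero(f, nvars, degree_bound, p, trials=20):
    """Schwartz-Zippel: if f is not identically zero over Z_p and has total
    degree <= degree_bound, a uniformly random point is a root with
    probability <= degree_bound / p. Surviving many random evaluations
    therefore means f is identically zero with high probability."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(nvars)]
        if f(point) % p != 0:
            return False  # definitely not identically zero
    return True  # identically zero with high probability

p = 2**61 - 1  # a Mersenne prime, comfortably larger than the degree bound here

zero = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
nonzero = lambda v: v[0] * v[1] + 7

print(probably_zero(zero, 2, 2, p))     # True: (x+y)^2 - x^2 - 2xy - y^2 == 0
print(probably_zero(nonzero, 2, 2, p))  # False, except with probability <= (2/p)^20
```

Note that this only covers the easy case the question sets aside: the open part is precisely what to do when the degree can be as large as $2^{|f|}$ while the field has only $2^{\Omega(n)}$ elements.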

### Dave Winer

#### The NYT wants crowd-sourced

I want crowd-speaks.

Yes I am part of a crowd. Undifferentiated slurry of humanity, with collective wisdom and intelligence and blah blah blah etc etc.

What I want is my name under your masthead. And the same opportunity for anyone with integrity, an idea and a little expertise. The floodgates have been open for 15 to 20 years.

There's still an opportunity to make some of that stand out from the rest using the NYT brand. But you have to give up some or most of the elitism, but not the intelligence and integrity.

### /r/compsci

#### Are there any open questions in amorphous computing appropriate for an MSc thesis?

I'm searching for a thesis topic. I'm interested in multi-agent systems. I think of the amorphous computer as a collection of unreliable agents which live on a surface or volume, have limited processing power and memory and can communicate only locally. Are there any open questions that could be tackled by an MSc student like me in 9 months time?

submitted by suorm
Equality Problem: We have two players, player 1 (Alice) who gets an $n$-bit vector $X$, and player 2 (Bob) who gets an $n$-bit vector $Y$. We want one of them to output $1$ if and only if for all indices $i$: $X[i] = Y[i]$.
Suppose that I choose the inputs $X$ and $Y$ uniformly at random from the set $\{0,1\}^n$. Initially, Alice knows nothing about Bob's input $Y$. Consider a protocol $P$ for equality. Alice and Bob run $P$, and assume that $P$ outputs $1$. By the correctness of $P$, Alice knows that for each index $i$, $Y[i] = X[i]$. Thus Alice has learned $|Y| = n$ bits. Doesn't this immediately imply the $\Omega(n)$ lower bound for $P$?
Note: Of course, I know that the above argument must be flawed since we can solve equality with $O(\log n)$ bits when allowing randomization. I'm just trying to understand why my proof attempt is flawed.
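For contrast, the $O(\log n)$ randomized protocol mentioned in the note can be sketched as follows (my own illustration): Alice sends a random small prime $p$ and her input modulo $p$; both values fit in $O(\log n)$ bits, yet unequal inputs are caught with high probability.

```python
import random

def fingerprint_equal(x_bits, y_bits, trials=10):
    """Randomized fingerprinting for equality.
    Per trial, Alice sends (p, int(x) mod p), each < n^2, i.e. O(log n) bits.
    If x != y, then x - y has at most ~n prime factors below n^2, while there
    are ~n^2/ln(n^2) primes below n^2, so each trial catches the difference
    with probability 1 - O(1/n)."""
    n = len(x_bits)
    x, y = int(x_bits, 2), int(y_bits, 2)
    bound = max(4, n * n)
    primes = [q for q in range(2, bound)
              if all(q % d for d in range(2, int(q ** 0.5) + 1))]
    for _ in range(trials):
        p = random.choice(primes)
        if x % p != y % p:
            return False  # fingerprints differ: definitely unequal
    return True  # equal with high probability

random.seed(1)
print(fingerprint_equal("1011" * 8, "1011" * 8))           # True
print(fingerprint_equal("1011" * 8, "1011" * 7 + "1010"))  # False: inputs differ by 1
```

The existence of this protocol is exactly why the "Alice learned $n$ bits" argument cannot be sound: whatever Alice infers about $Y$ after seeing output $1$ is not information that had to be *communicated* bit by bit.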